How to assign each item in a list to keys in a map (strings)

I'm storing (non-personal) data in a list of Strings, which is populated once the user carries out an action.
Since the List is only given values once the user performs the action, I don't know how many items there will be.
I'm trying to sort this List into two types of data:
Strings and DateTime
I've been trying to convert the list into a Map, but I'm not sure how to assign each item to the keys "appname" and "durations". How would you do this?
Please suggest a better way of solving this, if there is one.
***Updated
Data example:
YouTube
1h 20m on screen - 2m background
1h 22m
Chrome
1h 26m on screen - 10m background
1h 36m
Google Maps
3h 4m on screen - 2h 54m background
5h 58m
The data is stored using a final List<String> _usage.
** Update: added example of what data should be parsed and how to access it.
I'm looking to parse this data as JSON:
{"app": "YouTube", "duration": "1h 22m"},
//Removes the'1h 20m on screen - 2m background'
{"app": "Chrome", "duration": "1h 36m"},
//Removes the '1h 26m on screen - 10m background'
{"app": "Google Maps", "duration": "5h 58m")"}
//Removes the '3h 4m on screen - 2h 54m background'
I'd preferably like to access it through a Map or HashMap type.

I guess you could do something like this, but it is difficult to build a stable solution without knowing all the pitfalls in the data. E.g. I assume we can always guarantee that the name of the app comes first, followed at some point by the durations:
List<String> _usage = [
  'YouTube',
  '1h 20m on screen - 2m background',
  '1h 22m',
  'Chrome',
  '1h 26m on screen - 10m background',
  '1h 36m',
  'Google Maps',
  '3h 4m on screen - 2h 54m background',
  '5h 58m'
];

void main(List arguments) {
  final map = <String, Duration>{};

  for (var i = 0; i < _usage.length; i += 3) {
    final app = _usage[i];
    final duration = _parseDuration(_usage[i + 2]);

    if (map.containsKey(app)) {
      map[app] += duration;
    } else {
      map[app] = duration;
    }
  }

  // {YouTube: 1:22:00.000000, Chrome: 1:36:00.000000, Google Maps: 5:58:00.000000}
  print(map);
}

final _regExp = RegExp(r'(?<hours>\d+)h (?<minutes>\d+)m');

Duration _parseDuration(String line) {
  final match = _regExp.firstMatch(line);
  if (match == null) {
    throw Exception('Could not get duration from: $line');
  }
  return Duration(
      hours: int.parse(match.namedGroup('hours')),
      minutes: int.parse(match.namedGroup('minutes')));
}
I also extended the solution so that if the same app name comes up again, we add to the duration instead of overwriting the previous value.

Related

Is there a way to check the volume level of all processes with pipewire/pulseaudio?

I'm trying to find a way to check if I have any desktop audio AND which processes are producing sound.
After some searching I found a way to list all the sink inputs in pipewire/pulseaudio using pactl list sink-inputs, but I have no idea whether a given input is muted or not.
Example output:
Sink Input #512
    Driver: protocol-native.c
    Owner Module: 9
    Client: 795
    Sink: 1
    Sample Specification: float32le 2ch 48000Hz
    Channel Map: front-left,front-right
    Format: pcm, format.sample_format = "\"float32le\"" format.rate = "48000" format.channels = "2" format.channel_map = "\"front-left,front-right\""
    Corked: yes
    Mute: no
    Volume: front-left: 43565 / 66% / -10.64 dB, front-right: 43565 / 66% / -10.64 dB
            balance 0.00
    Buffer Latency: 165979 usec
    Sink Latency: 75770 usec
    Resample method: speex-float-1
    Properties:
        media.name = "Polish cow (English Lyrics Full Version) - YouTube"
        application.name = "Firefox"
        native-protocol.peer = "UNIX socket client"
        native-protocol.version = "35"
        application.process.id = "612271"
        application.process.user = "user"
        application.process.host = "host"
        application.process.binary = "firefox"
        application.language = "en_US.UTF-8"
        window.x11.display = ":0"
        application.process.machine_id = "93e71eeba04e43789f0972b7ea0e4b39"
        application.process.session_id = "2"
        application.icon_name = "firefox"
        module-stream-restore.id = "sink-input-by-application-name:Firefox"
The obvious thing would be to look at the Mute and Volume lines, but that is not reliable at all: currently the YouTube video is paused, yet Mute is shown as no and Volume is no different from when the video is actually playing.
I need the solution to be scriptable, since I'll be muting certain things when another process is making sound and unmuting them again when there is no sound, using a bash script. If it is not possible with pipewire/pulseaudio but possible with another sound server, please do tell me.
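As a rough, untested starting point for something scriptable, the fields shown in that output can be pulled apart programmatically. The sketch below (Python, purely for illustration; the same matching could be done with grep/awk in a bash script) reports the application.name, Corked and Mute values per sink input. It only surfaces what pactl already prints, so it inherits the reliability problem described above.

#!/usr/bin/env python3
# Rough sketch: list each sink input with its application name and the
# Corked / Mute flags reported by `pactl list sink-inputs`.
import re
import subprocess

def sink_inputs():
    out = subprocess.run(["pactl", "list", "sink-inputs"],
                         capture_output=True, text=True, check=True).stdout
    streams = []
    current = None
    for raw in out.splitlines():
        line = raw.strip()
        if line.startswith("Sink Input #"):
            current = {"index": line.split("#", 1)[1]}
            streams.append(current)
        elif current is not None:
            flag = re.match(r"(Corked|Mute):\s*(\S+)", line)
            if flag:
                current[flag.group(1).lower()] = flag.group(2)
            prop = re.match(r'application\.name = "(.*)"', line)
            if prop:
                current["app"] = prop.group(1)
    return streams

if __name__ == "__main__":
    for stream in sink_inputs():
        print(stream)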

NodeJS Sharp Provides Incorrect Quality For JPEG Images

I am working with the sharp node package to calculate the quality of an image.
I couldn't find any API in the package that would calculate the quality.
So I came up with an implementation that follows these steps:
1. Accept the incoming image buffer & create a sharp instance
2. Create another instance from this instance by setting quality to 100
3. Compare the size of the original sharp instance and the new sharp instance
4. If there is a difference in size, update the quality and execute step 2 with the updated quality
5. Return the quality once the comparison in step 4 gives the smallest difference
I tested this approach using an image of known quality, i.e. 50 (confirmed).
EDIT - I generated images with different quality values using Photoshop.
However, the above logic returns the quality as 82 (expected something close to 50).
Problem
So the problem is that I am not able to figure out the quality of the image.
It would be fine if the above logic returned a close value such as 49 or 51.
However, the result is totally different.
Results
As per this logic, I get the following results for a given quality:
Actual Quality 50 - Result 82
Actual Quality 60 - Result 90
Actual Quality 70 - Result 90
Actual Quality 80 - Result 94
Actual Quality 90 - Result 98
Actual Quality 100 - Result 98
Code
The following code snippet is used to calculate the quality.
I do understand that it needs improvements for precise results, but it should at least provide close values.
async function getJpegQuality(image: sharp.Sharp, min_quality: number, max_quality: number): Promise<number> {
  if (Math.abs(max_quality - min_quality) <= 1) {
    return max_quality;
  }
  const updated_image: sharp.Sharp = sharp(await image.jpeg({ quality: max_quality }).toBuffer());
  const [metadata, updated_metadata]: sharp.Metadata[] = await Promise.all([image.metadata(), updated_image.metadata()]);
  // update quality as per size comparison
  if (metadata.size > updated_metadata.size) {
    const temp_max = Math.round(max_quality);
    max_quality = Math.round((max_quality * 2) - min_quality);
    min_quality = Math.round(temp_max);
  } else {
    max_quality = Math.round((min_quality + max_quality) / 2);
    min_quality = Math.round((min_quality + max_quality) / 2);
  }
  // recursion
  return await getJpegQuality(image, min_quality, max_quality);
}
Usage
const image: sharp.Sharp = sharp(file.originalImage.buffer);
const quality = await getJpegQuality(image, 1, 100);
console.log(quality);
Thanks!

Redis Node - Querying a list of 250k items of ~15 bytes takes at least 10 seconds

I'd like to query a whole list of 250k items of ~15 bytes each.
Each item (some coordinates) is a ~15-byte string like xxxxxx_xxxxxx_xxxxxx.
I'm storing them using this function:
function setLocation({id, lat, lng}) {
  const str = `${id}_${lat}_${lng}`
  client.lpush('locations', str, (err, status) => {
    console.log('pushed:', status)
  })
}
Using Node.js, doing an lrange('locations', 0, -1) takes between 10 and 15 seconds.
Slowlog redis lab:
I tried to use sets, same results.
According to this post, this shouldn't take more than a few milliseconds.
What am I doing wrong here?
Update:
I'm using an instance on Redis Labs.

TFRecord format for multiple instances of the same or different classes on one training image

I am trying to train a Faster R-CNN on a grocery detection dataset using the new Object Detection API, but I do not quite understand the process of creating a TFRecord file for it. I am aware of the Oxford and VOC dataset examples and the scripts to create TFRecord files, and they work fine if there is only one object per training image, which is what I see in all of the official examples and GitHub projects. I have images where more than 20 objects are defined, and the objects have different classes. I don't want to iterate 20+ times per image and create 20 nearly identical tf_examples, where only the encoded image, repeated 20+ times, would take up all my space.
tf_example = tf.train.Example(features=tf.train.Features(feature={
    'image/height': dataset_util.int64_feature(height),
    'image/width': dataset_util.int64_feature(width),
    'image/filename': dataset_util.bytes_feature(filename),
    'image/source_id': dataset_util.bytes_feature(filename),
    'image/encoded': dataset_util.bytes_feature(encoded_image_data),
    'image/format': dataset_util.bytes_feature(image_format),
    'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
    'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
    'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
    'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
    'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
    'image/object/class/label': dataset_util.int64_list_feature(classes),
}))
return tf_example
I believe the answer to my question is that, when creating tf_records, xmin, xmax, ymin, ymax, classes_text, and classes should all be lists with one value per bounding box, so I can add the different objects and their parameters to these lists for a single image.
Maybe someone has experience and can advise: will the approach I've described work, and if not, is there a way to create tf_records for multiple objects in one image in a clean and simple way?
I've included here some of the features (not all of them) for creating tfrecords the way I think should work, based on what is said in the comments (List of ... (1 per box)) in the link I attached. I hope the idea is clear from the attached dump.
To clarify the situation: xmin, for example, holds 4 different normalized values [0.4056372549019608, 0.47794117647058826, 0.4840686274509804, 0.4877450980392157] for the 4 different bboxes in the attached feature example. Don't forget that the lists were converted into a serializable format using the dataset_util.float_list_feature method.
features {
  feature {
    key: "image/filename"
    value {
      bytes_list {
        value: "C4_P06_N1_S4_1.JPG"
      }
    }
  }
  feature {
    key: "image/format"
    value {
      bytes_list {
        value: "jpeg"
      }
    }
  }
  feature {
    key: "image/height"
    value {
      int64_list {
        value: 2112
      }
    }
  }
  feature {
    key: "image/key/sha256"
    value {
      bytes_list {
        value: "4e0b458e4537f87d72878af4201c55b0555f10a0e90decbd397fd60476e6e973"
      }
    }
  }
  feature {
    key: "image/object/bbox/xmax"
    value {
      float_list {
        value: 0.43323863636363635
        value: 0.4403409090909091
        value: 0.46448863636363635
        value: 0.5085227272727273
      }
    }
  }
  feature {
    key: "image/object/bbox/xmin"
    value {
      float_list {
        value: 0.3565340909090909
        value: 0.36363636363636365
        value: 0.39204545454545453
        value: 0.4318181818181818
      }
    }
  }
  feature {
    key: "image/object/bbox/ymax"
    value {
      float_list {
        value: 0.9943181818181818
        value: 0.7708333333333334
        value: 0.20265151515151514
        value: 0.9943181818181818
      }
    }
  }
  feature {
    key: "image/object/bbox/ymin"
    value {
      float_list {
        value: 0.8712121212121212
        value: 0.6174242424242424
        value: 0.06818181818181818
        value: 0.8712121212121212
      }
    }
  }
  feature {
    key: "image/object/class/label"
    value {
      int64_list {
        value: 1
        value: 0
        value: 3
        value: 0
      }
    }
  }
}
I basically did what I thought should help, but I got the following numbers during training, which is abnormal.
INFO:tensorflow:global step 204: loss = 1.4067 (1.177 sec/step)
INFO:tensorflow:global step 205: loss = 1.0570 (1.684 sec/step)
INFO:tensorflow:global step 206: loss = 1.0229 (0.916 sec/step)
INFO:tensorflow:global step 207: loss = 80484784668672.0000 (0.587 sec/step)
INFO:tensorflow:global step 208: loss = 981436265922560.0000 (0.560 sec/step)
INFO:tensorflow:global step 209: loss = 303916113723392.0000 (0.539 sec/step)
INFO:tensorflow:global step 210: loss = 4743170218786816.0000 (0.613 sec/step)
INFO:tensorflow:global step 211: loss = 2933532187951104.0000 (0.518 sec/step)
INFO:tensorflow:global step 212: loss = 1.8134 (1.513 sec/step)
INFO:tensorflow:global step 213: loss = 73507901414572032.0000 (0.553 sec/step)
INFO:tensorflow:global step 214: loss = 650799901688463360.0000 (0.622 sec/step)
P.S. Additional information: in the normal case, where 1 image has 1 object class from this dataset, everything works fine.
You are correct in that xmin, xmax, ymin, ymax, classes_text, and classes are all lists with one value per bounding box. There is no need to duplicate the image for each bounding box; it would indeed take up a lot of disk space. As #gautam-mistry pointed out, the records are streamed into tensorflow; as long as each image will fit into RAM you should be okay, even if you duplicated the images (so long as you have the disk space).
A TFRecords file represents a sequence of (binary) strings. The format is not random access, so it is suitable for streaming large amounts of data but not suitable if fast sharding or other non-sequential access is desired.
tf.python_io.TFRecordWriter
tf.python_io.tf_record_iterator
tf.python_io.TFRecordCompressionType
tf.python_io.TFRecordOptions
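For reference, here is a minimal, untested sketch of writing such examples with that API; it assumes a list called examples holding tf_example objects built as in the snippet above:

import tensorflow as tf

# Sketch: write one serialized tf.train.Example per image (TF 1.x tf.python_io API).
with tf.python_io.TFRecordWriter('train.record') as writer:
    for tf_example in examples:  # 'examples' is a hypothetical list of tf.train.Example
        writer.write(tf_example.SerializeToString())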
I found what the problem was: I had a mistake in my protobuf label file. Different classes were mapped to the same class number. For example:
item {
  id: 1
  name: 'raccoon'
}
item {
  id: 1
  name: 'lion'
}
And so on. Because I had around 50 classes, the loss only went tremendously high at some steps. Maybe it'll help someone: be cautious with the proto txt :)
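Presumably the fix is just giving every class its own unique id in the label map, along these lines:
item {
  id: 1
  name: 'raccoon'
}
item {
  id: 2
  name: 'lion'
}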

mIRC anti-flood script

I'm looking for a way to kick users for flooding.
The idea is:
on [lessthanhalfop]:text:*:#chan: {
  If [timer$nick] !== 0 {
    set %kickstate$nick +1
    if %kickstate$nick < 4 {
      kick $nick #chan [reason:flood]
      echo > kickedlist.txt
      delete [timer$nick]
      delete [timer$nick]
      makenew timer with 4 seconds
    }
    Set timer$nick 5seconds
  }
}
Can anyone help me make this workable, with unique timers for each $nick so that they do not override each other for different users?
All I want it to do is kick people who flood the chat by typing too much within a particular time period (in this case 2 seconds). Can anyone help me solve this?
I'm using mIRC, but the channel is in the swiftirc network, if anyone wants to know.
Solution:
A. We set an incrementing variable (with a life span of 2 seconds) in the format "cTxtFlood.USER-ADDRESS". This allows us to track every new flooder in our system, and it cleans up the people who talked but are not flooders.
B. We check whether the variable counter exceeds X lines (5 in the example).
C. If it is a flooder, we kick and ban the user with a ban span of 300 seconds.
Little info:
chan - the channel you want to protect
#* - only if I have op in the channel
-u2 = unset the variable after 2 seconds
ban -ku300 = kick and ban for 300 seconds
Complete code (not tested):
on #*:text:*:#chan: {
  inc -u2 % [ $+ [ $+(cTxtFlood.,$wildsite) ] ]
  if (% [ $+ [ $+(cTxtFlood.,$wildsite) ] ] == 5) {
    echo -ag ban -ku300 # $nick 2 Channel Flood Protection (5 lines at 2 sec's)
  }
}
