Google reports DMARC success rate is 0.6% - gmail

Back in July we had a breach. An employee loaded a program that contained a spambot, and naturally our reputation in the world crashed. We fixed it and reported the correction to everyone we could, and things went back to normal, except with Google. Even Google Workspace (the paid email platform) won't accept our mail, even after the client whitelists us.
Postmaster Tools reports (Aug 4, 2022): DKIM success rate 100.0%, SPF success rate 100.0%, DMARC success rate 0.6%.
Spam rate is zero
IP reputation is ONLY reported for the 24-hour period when we were spamming.
Domain reputation went to "bad" on that day and has remained there in spite of everything.
Spam Feedback loop is zero
Encryption is 100%
Delivery Error rate shows "100%" as of the day of the hack and NO DATA since.
I'm wondering if Google is no longer collecting data on us. Every period LESS THAN "the last 60 days" says "no data, check back later".

Related

Segmentation - Starting / Stopping Recording of live stats

Does anyone know if there is a problem with the recording of live stats? Does it turn itself off after a predetermined amount of time? It often seems to turn itself off randomly...
Also, does this functionality affect the audience stats? On this account specifically, yesterday I had a number of users, and today the stats seem to have reset and started counting again from zero.
As you can see, the segmentation has not been modified since 19 April 2022, but the actual stats have changed and now show very low numbers again...
Any idea?
Segmentation Screenshot

SpamAssassin - score by time of day sent

Is there a way to assign a score to mail sent between certain hours? I find a lot of spam is sent in the middle of the night, so I would like to give anything between, say, 2am and 5am a score of 2 or 3.
You can use SpamAssassin to penalize mail received within certain hours, but it's messy.
Before we start, verify that SA's primary defenses are properly set up:
DNS Blocklists, including DNSBLs & URIBLs, are a necessity; set them up before all else
Bayes in SpamAssassin is another must-have, though it requires training
Use Razor for fuzzy matching (see also Installing Razor) if the license works for you.
If that's insufficient, then you can address the sort of issue you're keying on. Try:
The RelayCountry plugin to penalize countries you never converse with
The TextCat plugin to discriminate against the languages you never converse in
If all that doesn't help enough, then (and only then) you can consider what you proposed. Read on.
Don't forget about time zones: you can't use the Date header, because it reflects the sender's clock and time zone (and can be forged). This type of rule is not safe for deployments whose conversations span too many time zones, and you must ensure your MX servers are all consistent and in the same time zone. Be aware that daylight saving time (aka “summer time”) can be annoying here.
Identify a relay that your receiving infrastructure adds but is added before SpamAssassin runs (so SA can see it). This will manifest as a Received header near the top of your email. Again, make sure it's actually visible to SpamAssassin; the Received header added by your IMAP server will not be visible.
It is possible that you have SpamAssassin configured to run before any internal relay is stamped into the message. If this is the case, do not proceed further as you cannot reliably determine the local time.
Okay, all caveats aside, here's an example Received header:
Received: from external-host.example.com
(external-host.example.com [198.51.100.25])
by mx.mydomain with ESMTPS id ABC123DEF456;
Fri, 13 Mar 2020 12:34:56 -0400 (EDT)
This must be a header that one of your own systems adds; otherwise it could carry a different time zone, clock skew, or even a forged timestamp.
Match that in a rule that clearly denotes you as the author (by convention, start it with your initials):
header CL_RCVD_WEE_HOURS Received =~ /\sby\smx\.mydomain\swith\s[^:]{9,64}+:(?<=[0 ][2-4]:)[0-9:]{5}\s-0[45]00\s/
describe CL_RCVD_WEE_HOURS Received by our mx.mydomain relay between 2a and 5a EST/EDT
score CL_RCVD_WEE_HOURS 0.500
A walk through that regex (see also an interactive explanation at Regex101):
First, you need to verify that it's your relay, matched by name: by mx.mydomain with
Then, skip ahead 9-64 non-colon characters (quickly, with no backtracking, thus the + sign). You'll need to verify your server doesn't have any colons here
The real meat is in a look-behind (since we actually skipped over the hour for speed purposes), which seeks the leading zero (or else a space) and then the 2, 3, or 4 (not 5 since we don't want to match a time like 05:59:59)
Finally, there's a sanity check to ensure we're looking at the right time zone. I assumed you're in the US on the east coast, which is -0400 or -0500 depending on whether daylight savings is in effect
So you'll need to change the server name, review whether the colon trick works with your relay, and possibly adjust the time zone regex.
I also gave this a lower score than you desired. Start low and slowly raise it as needed. 3.000 is a really high value.
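If you want to sanity-check the regex before deploying the rule, here is a rough harness, in Python purely for illustration (SpamAssassin itself evaluates the rule as a Perl regex), that runs the same pattern against the example header above. The possessive {9,64}+ needs Python 3.11 or newer; on older versions drop the trailing + (the result is the same for these inputs). The hostnames and timestamps are just the example values from this answer, not anything from your mail flow.
import re

# Same pattern as the CL_RCVD_WEE_HOURS rule above (Perl syntax carries over).
PATTERN = re.compile(
    r"\sby\smx\.mydomain\swith\s[^:]{9,64}+:(?<=[0 ][2-4]:)[0-9:]{5}\s-0[45]00\s"
)

# SpamAssassin matches header rules against the unfolded (single-line) header.
noon = ("from external-host.example.com "
        "(external-host.example.com [198.51.100.25]) "
        "by mx.mydomain with ESMTPS id ABC123DEF456; "
        "Fri, 13 Mar 2020 12:34:56 -0400 (EDT)")
wee_hours = noon.replace("12:34:56", "03:34:56")

print(bool(PATTERN.search(noon)))       # False: 12:34 is outside the 2a-5a window
print(bool(PATTERN.search(wee_hours)))  # True:  03:34 is inside the window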

Chrome Lighthouse comma and full stop in performance option

I have a few questions regarding the report of lighthouse (see screenshot below)
The first is a culture thing: I assume the value 11.930 ms stands for 11 seconds and 930 ms. Is this the case?
The second is the delayed paint compared to the size. The third entry (7.22 KB) delays the paint by 3,101 ms, while the fourth entry delays the paint by only 1,226 ms, although its JavaScript file is more than three times the size (24.03 KB versus 7.22 KB). Does anybody know what might be the cause?
Screenshot of Lighthouse
This is an extract of a Lighthouse report. In the Google Chrome Lighthouse screenshot you can see that a few metrics are written with a comma (11,222 ms) and others with a full stop (7.410 ms).
Thank you for discovering quite a bug! An issue has been filed in the Lighthouse GitHub repo.
To explain what's likely going on: it looks like this report was generated with the CLI (or at least in a locale different from the one it is being displayed in). Some numbers (such as the ones in the table) are converted to strings ahead of time, while others are converted at display time in the browser. The browser-rendered numbers respect your OS/user-selected locale, while the pre-stringified numbers do not.
To answer your questions...
Yes, the value it's reporting is 11930 milliseconds or 11 seconds and 930 milliseconds (11,930 ms en-US or 11.930 ms de-DE).
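If you want to see the grouping difference outside of Lighthouse, here is a tiny sketch (Python, purely for illustration) rendering the same millisecond value both ways. The de_DE.UTF-8 locale name is an assumption and depends on what your OS has installed.
import locale

value_ms = 11930

# Python's default thousands grouping mirrors the en-US convention:
print(f"{value_ms:,} ms")    # -> 11,930 ms

# Under a German locale the separator flips to a full stop (if installed):
try:
    locale.setlocale(locale.LC_ALL, "de_DE.UTF-8")
    print(locale.format_string("%d ms", value_ms, grouping=True))  # -> 11.930 ms
except locale.Error:
    print("de_DE.UTF-8 locale is not installed on this system")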
The delayed paint metric is reporting to you how many milliseconds after the load started the asset finished loading. There are multiple factors that influence this number including when the asset was discovered by the browser, queuing time, server response time, network variability, and payload size. The small script that delayed paint longer likely had a lower priority or was added to your page later than the larger script was and thus was pushed out farther.

Instagram rate-limit header with no predictable value

According to documentation: https://www.instagram.com/developer/limits/
The rate-limit control works under a "time-sliding" window. The question is:
At what frequency does the remaining-calls HTTP header (x-ratelimit-remaining) increase: every second? every minute? every hour?
Reading the docs ("5000/hr per token for Live apps"; our company app went Live already), I assumed a frequency limiter recalculated each second or minute, but after several days of trying different strategies the value doesn't seem to follow any deducible behaviour.
Possible answers (depending on how it is coded) could be:
(a sliding window, like a frequency limiter)
It regains 1 credit every 720 ms (3600 s in an hour / 5000 calls) while no request is made, up to 5000, and is decremented otherwise.
If we do 1 request at the correct frequency, we should never exhaust the 5000 calls, so we could spend them strategically: dispersed, clustered, or traffic-adapted (see the sketch after this list).
(a limited pool recharging each hour)
Starting with 5000 remaining, it loses 1 credit per request, no matter the frequency, and one hour after that first request it goes back to 5000.
It renews to 5000 every hour, counting from when the token was used for its first request.
It loses 1 credit per request and resets to 5000 at fixed clock hours, like 12:00, 13:00, 14:00, 15:00...
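For reference, a rough sketch of hypothesis (a), written in Python just for illustration (the model is language-agnostic, and the class and parameter names are made up for the example): a pool that regains one credit every 720 ms up to a cap of 5000 and spends one per request. This only illustrates the model being tested against the header, not Instagram's documented behaviour.
import time

class RefillingLimiter:
    def __init__(self, capacity=5000, refill_interval=0.72):
        self.capacity = capacity
        self.refill_interval = refill_interval
        self.credits = capacity
        self.last_refill = time.monotonic()

    def _refill(self):
        # Add back one credit per elapsed 720 ms interval, capped at capacity.
        now = time.monotonic()
        gained = int((now - self.last_refill) / self.refill_interval)
        if gained:
            self.credits = min(self.capacity, self.credits + gained)
            self.last_refill += gained * self.refill_interval

    def try_request(self):
        """Spend one credit if available; return True if the call may proceed."""
        self._refill()
        if self.credits > 0:
            self.credits -= 1
            return True
        return False

# Usage: pace calls so the pool never empties.
limiter = RefillingLimiter()
for _ in range(3):
    if limiter.try_request():
        pass  # perform the API call here
    else:
        time.sleep(limiter.refill_interval)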
I'm using jInstagram 1.1.7.
After a lot of testing....
I have some temporary conclusions...
Starting from 5000, if you fetch at a uniform rate (720 ms/request) you will reach about 500 somewhere around minute 50; then Instagram will begin to give you credit in portions smaller than 500. So at minute 60 you'll have about 150 remaining calls left, and Instagram will give you another credit portion, generally reaching around 500 on average and then going down again, of course...
If you stop consuming for approximately 30 minutes, you will get back to 5000 credits.
Also, although they give you 5000 remaining calls, they seem to keep counters indexed by IP: if you make requests from different IPs with the same credential, each counter acts as if it ignores the others.
Besides that, Instagram has a lot of trouble keeping a consistent value for the x-ratelimit-remaining HTTP header it returns on every request.
It looks related to some overwriting, and to some kind of race between the servers replicating the last value.
Shame on you, Instagram; I spent a lot of time adapting my cool throttling algorithm to your buggy behaviour, assuming you had good engineering down there!
Please fix this so we can play fair with you instead of playing hide and seek with stealth tricks.

Bluetooth Ping Latency

I am currently working on a project involving a Lego Mindstorms kit. The brick is the NXT, and I was curious about the Bluetooth ping rates.
I ran a test of 100 pings on it and got some interesting results. The latencies seemed to fall into bands. I increased to 10,000 pings and it highlighted this trend even more clearly. Does anyone know what could cause this to happen?
In case it is relevant, the distance between the sender and receiver was about 3 metres.
A few reasons:
Buffering, and the internal timers used to flush buffers, can cause it.
It also depends on the ping interval (i.e. the time between subsequent pings), as the link might go into power-save mode during inactivity, and it will take a finite time to come back up.
The size of the ping packets matters as well.
What Bluetooth profile is being used here?
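If you want to see the bands more clearly, one option is to bucket your measured round-trip times into a histogram, along these lines (a minimal Python sketch; "latencies.txt" is just a placeholder for wherever you dumped the per-ping times in milliseconds, one per line):
from collections import Counter

BIN_MS = 10  # bucket width in milliseconds

with open("latencies.txt") as f:
    latencies = [float(line) for line in f if line.strip()]

# Count how many pings fall into each 10 ms bucket.
bins = Counter(int(ms // BIN_MS) * BIN_MS for ms in latencies)

for start in sorted(bins):
    print(f"{start:4d}-{start + BIN_MS - 1:3d} ms  {'#' * bins[start]}")
The spacing between the clusters in that output is what you would then compare against the buffer-flush timers and power-save behaviour mentioned above.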
