Google MCF API - Getting data after 10:00 UTC+2 - cron

I made a Python script that calls several Google Ads API endpoints to get some data. Everything worked well while I was developing it, but when I ran it with a cron job on a server early in the morning, I was surprised to find no errors in my monitoring even though one of the calls returned no data.
I tried to see whether the problem was in my code, but when I launched it from my computer everything was OK. I then wondered whether the time I launch it (around 1:00) might simply be too early.
I just figured out that if I launch my script before 10:00 I get no data, and after that I get some. I'm in UTC+2, which is why it took me a while to realize this was the issue. I cannot find any topic about what time the data becomes available when you request it for yesterday.
Any idea why this happens?
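Not an answer to when the data becomes available, but one way to stop the cron run from silently producing nothing is to treat an empty report as "not ready yet" and retry a bit later, failing loudly if it never fills in. A minimal sketch, assuming a hypothetical fetch_report callable that wraps whatever API call returns yesterday's rows:

import time

MAX_ATTEMPTS = 6            # keep retrying for up to ~2.5 hours
RETRY_DELAY_SECONDS = 1800  # wait 30 minutes between attempts

def fetch_with_retry(fetch_report):
    """Call fetch_report() until it returns rows or the attempts run out."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        rows = fetch_report()  # hypothetical wrapper around the real API call
        if rows:
            return rows
        print(f"Attempt {attempt}: report is empty, retrying in {RETRY_DELAY_SECONDS} s")
        time.sleep(RETRY_DELAY_SECONDS)
    raise RuntimeError("Yesterday's report is still empty; data may not be published yet")

With something like this, the monitoring would at least show an error instead of a silently empty run.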

Related

NetSuite scheduled script HTTPS post fails while other scheduled scripts run

In NetSuite, we created a scheduled script that runs every day to fetch bank account information.
For that, we use the HTTPS post method.
After a few months of testing with different customers, we figured out that if a customer has no other scripts running, it works perfectly, but if the customer has too many scripts running, instead of waiting we get:
SSS_INVALID_HOST_CERT An untrusted, unsupported, or invalid
certificate was found for this host.
The only way is to trigger the script manually at a specific hour like 08:57, to avoid the other scripts that run every 15 min, 30 min, and so on.
Has anyone already had this kind of issue?
Is there a trick to scheduling the script at a precise hour?

How to troubleshoot a management.azure.com REST API call

I'm invoking https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.DesktopVirtualization/hostPools?api-version=2021-07-12 and it randomly fails, returning a "500 Internal Server Error". Is there any way to troubleshoot this obscure error? Any log I can check?
Regards
EDIT: I am not providing the source code because this is not a code issue. This REST API call had been working perfectly in production for months and, two days ago, it suddenly started failing with a 500 error in (more or less) 40% of calls. I would like to check some log or similar in order to know what is happening behind the scenes.
Azure Virtual Desktop troubleshooting steps are documented here:
https://learn.microsoft.com/en-us/azure/virtual-desktop/troubleshoot-set-up-overview
There is not really enough detail in the question to answer anything specific for your environment.
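Not a root cause, but when roughly 40% of calls return 500, it usually ends up as a support case, and support will want the request IDs that ARM returns on each response. A small sketch using the azure-identity and requests packages (the subscription ID is a placeholder; the header names are the ones ARM normally echoes back):

import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
URL = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.DesktopVirtualization/hostPools"
    "?api-version=2021-07-12"
)

# Get a bearer token for the ARM audience.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

resp = requests.get(URL, headers={"Authorization": f"Bearer {token.token}"})

# ARM returns request/correlation IDs on each response; Azure support uses them
# to look up the failing call on their side.
print(resp.status_code)
print("x-ms-request-id:", resp.headers.get("x-ms-request-id"))
print("x-ms-correlation-request-id:", resp.headers.get("x-ms-correlation-request-id"))

Logging these IDs for the failing calls gives you something concrete to hand over, even if you cannot see the server-side logs yourself.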

Using a fixed server ID with Ookla speedtest-cli

Some time ago, I set up a Linux task to run speedtest-cli every 30 minutes to figure out a network issue. The task used the "--server ID" argument to measure the speed to the same server each time. I used it for a while and then forgot about it. Today I went back to revisit this, only to find that the API seems to have changed. Now, providing the --list argument does not print a list of hundreds of servers, but only the few (~10) nearest to you. In my case, the servers it reports seem to change at least daily. Requesting a speedtest against any server ID not reported in the list fails. Has anyone figured out a way to run a periodic speedtest against a fixed server using speedtest-cli or any other tool?
If you are still looking for a solution, here is my suggestion.
While this does not use speedtest-cli (which is no longer supported; you should look at the Ookla Speedtest command-line client instead), I believe this is what you are looking for. I'm running this in a Debian VM, but if you have access to a Raspberry Pi you can dedicate to the task, you may want to check this out:
https://github.com/geerlingguy/internet-pi
You can modify the docker-compose file to hard-code the server ID of your choice. You can get the ID from the Ookla Speedtest command-line client.
You would need to run the command:
speedtest -L
Good Luck!
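As a rough sketch of the end result (assuming the Ookla client's --server-id and --format flags, with 12345 as a placeholder ID taken from the speedtest -L output), a crontab entry that pins every run to the same server could look like:
*/30 * * * * speedtest --server-id=12345 --format=json >> /var/log/speedtest.jsonl
Replace 12345 with an ID actually listed by speedtest -L for your location.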

Azure Selenium script randomly fails

I have a Selenium script that is triggered every hour in an Azure pipeline to test whether some web pages are working. The weird thing is that the script randomly fails, at least twice a day, because it doesn't find an element. I believe this might happen because the pipeline agent is not fast enough to load the pages.
So I was wondering whether there is a way to solve this issue, because right now, when the script fails, it returns a false positive, and I would like to avoid that.
Thank you so much for any help or advice you can offer.
To wait until the page is fully loaded, you can check a similar ticket for the details.
In addition, for the Azure DevOps pipeline, to make it more stable, you can set up a self-hosted agent for the Selenium test.
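One concrete way to do that wait in the Python Selenium bindings is an explicit wait, so the test polls for the element instead of failing the moment a slow agent hasn't rendered it yet. A minimal sketch (the URL and the By.ID locator are placeholders for whatever the real test checks):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/page-under-test")  # placeholder URL
    # Poll for up to 30 seconds instead of failing immediately on a slow agent.
    element = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.ID, "status-banner"))  # placeholder locator
    )
    print("Element found:", element.tag_name)
finally:
    driver.quit()

The timeout only sets an upper bound; the wait returns as soon as the element appears, so fast runs are not slowed down.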

Determining Website Crash Time on Linux Server

2.5 months ago, I was running a website on a Linux server to do a user study on 3 variations of a tool. All 3 variations ran on the same website. While I was conducting my user study, the website (i.e., the process hosting the website) crashed. In my sleep-deprived state, I unfortunately did not record when the crash happened. However, I now need to know a) when the crash happened, and b) for how long the website was down until I brought it back up. I only have a rough timeframe for when the crash happened and for how long it was down, but I need to pinpoint this information as precisely as possible to do some time-on-task analyses with my user study data.
The server runs Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-165-generic x86_64) and has been minimally set up to run our website. As such, it is unlikely that any utilities aside from those that came with the OS have been installed. Similarly, no additional setup has likely been done. For example, I tried looking at the history of commands in the hope that HISTTIMEFORMAT had previously been set so that I could see timestamps. This ended up not being the case; while I can now see timestamps for commands, setting HISTTIMEFORMAT is not retroactive, meaning I can't get accurate timestamps for the commands I ran 2.5 months ago. That all being said, if you have an idea that you think might work, I'm willing to try it (as long as it doesn't break our server)!
It is also worth mentioning that I currently do not know whether it's possible to get a remote desktop or something of the like working; I've just been ssh'ing in and using the terminal to interact with the server.
I've been bouncing ideas around with friends and colleagues, and we all feel that there must be SOMETHING we could use to pinpoint when the server went down (e.g., network activity logs showing spikes around the time the user study began as well as when the website was revived, a log of previous/no-longer-running processes, etc.). Unfortunately, none of us knows enough about Linux logs or commands to really dig deep into this very specific issue.
In summary:
I need a timestamp for either when the website crashed or when it was revived. It would be nice to have both (or to otherwise determine for how long the website was down), but this is not strictly necessary
I'm guessing only a "native" Linux command will be useful, since nothing new/special has been installed on our server. Otherwise, any additional command/tool/utility would have to work retroactively.
It may or may not be possible to get a remote desktop working with the server (e.g., to use some tool that has a GUI you interact with to help get some information)
My colleagues and I have that sense of "there must be SOMETHING we could use" among the various logs and system information, such as network activity, process start times, etc., but none of us knows enough about Linux to do deep digging without some help
Any ideas for what I can try to help figure out at least when the website crashed (if not also for how long it was down)?
A friend of mine pointed me to the journalctl command, which keeps timestamped system and service logs independently of HISTTIMEFORMAT; in my case they went as far back as October 7. They contained enough information for me to determine both when I revived my Node.js server and when it initially went down.
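For anyone in a similar situation, invocations along these lines can help narrow it down (mywebsite.service is a placeholder unit name; if the process was started by hand rather than as a service, filtering by process name with a _COMM match may work instead):
journalctl --list-boots
journalctl -u mywebsite.service --since "3 months ago"
journalctl _COMM=node --since "3 months ago"
The gap between the last log line before the crash and the first line after the restart gives an upper bound on how long the site was down.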
