Send Azure cost report by mail with Azure Logic Apps

I tried Logic Apps to send a file by mail every time a blob is written to the container, and it works fine using the syntax below.
Now my problem is that the Azure cost report creates a directory that has the following structure:
/BlobContainer/BlobDirectoryFromAzureCostReportExport/DirectoryWithNameOfTheReport/
So far nothing fancy, we could just leave it that way, but my problem is that the cost analysis export creates a directory with this format:
YYYYMM01-YYYYMM30
For instance, I now have:
https://ccppdhbstolog.blob.core.windows.net/ccpphbrgdailycostcont/ccpphbrgdailycostdir/ccpphbrgdailycostreport/20200901-20200930/ccpphbrgdailycostreport_30133934-c624-4020-96ac-cf36d9f2e6dc.csv
I can configure Logic Apps to monitor this path:
/ccppdhbstolog.blob.core.windows.net/ccpphbrgdailycostcont/ccpphbrgdailycostdir/ccpphbrgdailycostreport/20200901-20200930/
But what will happen next month? (I would need to reconfigure...) Or maybe I can write this YYYYMMDD-YYYYMMDD for the coming month, but I do not know the syntax, nor whether this will work when there are 31 or 28 (February) days in the month.
I need to send the report automatically; any clue how I can make this work?
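One possible approach, as a sketch only (not verified against the actual Cost Management export): instead of pinning the trigger to a fixed folder, build the folder name dynamically with Logic Apps workflow expressions, which handle 28-, 30- and 31-day months for you, and read the blobs from a monthly Recurrence trigger. The container and directory names below are the ones from the question; the expression assumes the export folder always covers the first to the last day of the current month:
concat('ccpphbrgdailycostcont/ccpphbrgdailycostdir/ccpphbrgdailycostreport/',
       formatDateTime(startOfMonth(utcNow()), 'yyyyMMdd'),
       '-',
       formatDateTime(subtractFromTime(startOfMonth(addToTime(utcNow(), 1, 'Month')), 1, 'Day'), 'yyyyMMdd'))
The second formatDateTime() takes the first day of the next month and subtracts one day, which gives the last day of the current month whatever its length. You could then use this expression as the folder path of a List blobs action and feed the resulting blob into the existing send-mail step.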

Related

How do I filter a Job Estimates vs. Actuals report by customer: job name in QuickBooks Desktop SDK using QBFC?

I have modified some VB sample code to get most of what I need done using the QuickBooks SDK in an app launched from Excel using VBA. I am able to produce both a Time by Job Summary report and a Job Estimates vs. Actuals report, but for the latter I need to produce filtered copies of it for each customer:job reference number, and I'm not sure what the proper syntax is for this even after looking over the specific query in the API Reference for QB Desktop.
I'm fairly sure that this needs to be done during the request phase. Also, I'm using QBFC, so I have tried various combinations that seem logical, but still haven't received the desired output. If it helps, an example of what is needed for the filter would be like: 20-5050 Dan Barton Trucks. Below is my code for the request:
Set jobRQ = requestSet.AppendJobReportQueryRq
customerRef = "20-5050 Dan Barton Trucks"
With jobRQ
    .JobReportType.SetValue ENJobReportType.jrtJobEstimatesVsActualsSummary
    .ReportEntityFilter.ORReportEntityFilter.EntityTypeFilter.SetValue etfCustomer
    ' .ReportEntityFilter.ORReportEntityFilter.FullNameList.Add (customerRefID)
    .ORReportPeriod.ReportPeriod.FromReportDate.SetValue dateFrom
    .ORReportPeriod.ReportPeriod.ToReportDate.SetValue dateTo
    .SummarizeColumnsBy.SetValue scbTotalOnly
    .IncludeSubcolumns.SetValue True
    .DisplayReport.SetValue True
End With
I have commented out the line that doesn't work.

How to reload a .qvw file containing section access in batch mode in QlikView Desktop?

I execute "C:\Program Files\QlikView\qv.exe" /R "D:\QlikViewAdm\qlik_file.qvw" to launch the loader. It works when there is no section access. However, when I run a file with section access QV asks me for login and password. I can link the access to Windows user credentials, but it only works on my terminal.
Is there a way to point QV to a local file with credentials?
You can do it by using NTNAME in the Section Access, like so:
Section Access;
LOAD * INLINE [
    ACCESS, USERID, PASSWORD, NTNAME
    ADMIN, ADMIN, ADMIN, *
    ADMIN, , , yourWindowsUserID
];
You can get yourWindowsUserID by using the OsUser() function (in QlikView).
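For example, a quick way to see what OsUser() returns on your machine (just a sketch to run once in the script editor; the variable name is arbitrary):
// Print the value OsUser() returns, so you know exactly what to put in the NTNAME column
LET vWhoAmI = OsUser();
TRACE NTNAME will be: $(vWhoAmI);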
Hope this will help.
The first thing to notice is that user churn follows an exponential distribution, like nuclear decay. Hence, a t-test is useless (it assumes a normal distribution).
Second, it is worth looking at the counts of users with 0, 1, 2, ... events.
Third, just use the Wilcoxon test if you compare the same cohort of users before and after; otherwise, use the Mann–Whitney U test. It has less power than tests on normally distributed values, but it does the job whenever you cannot promise normality.
Spoiler: yes, it worked out. Cohorts 1 and 2 were nearly indistinguishable, but cohorts 3 and 1 yielded a difference at p < 0.001.
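A minimal sketch of both tests in Python with SciPy; the per-user event counts below are made-up placeholders:
from scipy import stats

# Hypothetical per-user event counts for two independent cohorts
cohort_1 = [0, 0, 1, 2, 0, 3, 1, 0, 5, 2]
cohort_3 = [1, 2, 4, 0, 6, 3, 2, 5, 1, 4]

# Independent cohorts -> Mann-Whitney U test
u_stat, p_value = stats.mannwhitneyu(cohort_1, cohort_3, alternative="two-sided")
print(f"Mann-Whitney U: U={u_stat}, p={p_value:.4f}")

# Same cohort measured before and after a change -> Wilcoxon signed-rank test
before = [0, 1, 2, 0, 3, 1, 0, 5, 2, 1]
after = [2, 2, 4, 1, 5, 3, 1, 7, 3, 2]
w_stat, p_value = stats.wilcoxon(before, after)
print(f"Wilcoxon: W={w_stat}, p={p_value:.4f}")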

How to push complex legacy logs into logstash?

I'd like to use ELK to analyze and visualize our GxP logs, created by our stone-old LIMS system.
At least the system runs on SLES, but the whole logging structure is a bit of a mess.
I try to give you an impression:
Main_Dir
| Log Dir
|   | Large number of sub dirs with a lot of files in them, of which some may be of interest later
| Archive Dir
|   | [some dirs which I'm not interested in]
|   | gpYYMM          <-- subdirs created automatically each month: YY = year; MM = month
|   |   | gpDD.log    <-- log file created automatically each day
|   | [more dirs which I'm not interested in]
Important: each medical examination that I need to track is completely logged in the gpDD.log file that represents the date of the order entry. The duration of the complete examination varies between minutes (if no material is available), several hours or days (e.g. 48h for a Covid-19 examination), or even several weeks for a microbiological sample. Example: all information about a Covid-19 sample that reached us on December 30th is logged in ../gp2012/gp30.log, even if the examination was made on January 4th and the validation / creation of the report was finished on January 5th.
Could you please give me some guidance on the right Beat to use (I guess either logbeat or filebeat) and how to implement the log transfer?
Logstash file input:
input {
  file {
    path => "/Main Dir/Archive Dir/gp*/gp*.log"
  }
}
Filebeat input:
- type: log
  paths:
    - /Main Dir/Archive Dir/gp*/gp*.log
In both cases the path is possible; however, if you need further processing of the lines, I would suggest using at least Logstash as a passthrough (using the beats input if you do not want to install Logstash on the source itself, which is understandable), as in the sketch below.
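A sketch of that passthrough wiring, with the host names and port as placeholders: Filebeat tails the files on the LIMS host and ships them to a separate Logstash, which listens with the beats input and forwards to Elasticsearch:
# filebeat.yml on the LIMS host
filebeat.inputs:
  - type: log
    paths:
      - /Main Dir/Archive Dir/gp*/gp*.log

output.logstash:
  hosts: ["your-logstash-host:5044"]

# Logstash pipeline on the Logstash host
input {
  beats {
    port => 5044
  }
}
filter {
  # grok / date / mutate filters for any further processing of the lines go here
}
output {
  elasticsearch {
    hosts => ["http://your-elasticsearch-host:9200"]
  }
}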

Dialogflow console returning a different result than when sent through the API (where the wrong timezone is used)

I have a DialogFlow agent set up where one intent is used to schedule reminders. I can say something like:
"At 5pm remind me to go for a run"
And it returns the sentence to remind (in this case 'go for a run') as well as the time to set the reminder, using @sys.date-time.
This works as intended, I am able to get the correct time because it just sends the time without a timezone attached.
When I use a command such as:
"In 15 minutes remind me to go for a run"
it sends the result as the time using the timezone, in this case, the incorrect one.
So, for example, a result for the date-time returned by the API was:
2020-11-09T14:20:33+01:00
which is an hour more than it should be.
I have checked the DialogFlow agent's default time zone, which is set to:
(GMT0:00) Africa/Casablanca
which I am fairly certain is the correct one for London time. However, moving to a different timezone changes it and actually gives the correct result for that timezone (just not my timezone), leaving me to wonder whether this time zone is broken.
Regardless, the Dialogflow console on the webpage returns the correct date-time, but in a different format using 'startDateTime' and 'endDateTime', something that the agent does not do when queried through the API.
I have checked all configurations within the program and cannot find any evidence of any code setting a new timezone, and in fact I have tried to add the London timezone when a query is sent, but this does not resolve the issue.
Does anyone have any advice on how to solve this?
EDIT:
After receiving a good suggestion from a user, I am reminded of the most puzzling part of this issue: changing the timezone to GMT -1:00 vs 0:00 actually results in a difference of two hours.
Around 1pm I queried it to get the time in 15 minutes.
When it was set to GMT-1:00 Atlantic/Cape_Verde, the time returned was:
2020-11-10T12:21:15-01:00
When it was set to GMT0:00 Africa/Casablanca, the time returned was:
2020-11-10T14:22:07+01:00
Neither is the correct time, and despite the timezone setting suggesting a 1 hour difference, the results are actually 2 hours apart.
For London the correct timezone should be GMT -1; Casablanca, Africa is GMT+1. I used this web page to determine this.
If you are in London, it is recommended to configure your agent to use the correct time zone (GMT-1).
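For reference, the per-request route the question says was attempted goes through the query parameters of the detectIntent call, which accept an explicit IANA time zone. A minimal sketch with the Python client (google-cloud-dialogflow); the project ID, session ID and text are placeholders:
from google.cloud import dialogflow

# Placeholder project and session IDs
session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-project-id", "my-session-id")

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="In 15 minutes remind me to go for a run",
                              language_code="en")
)

# Pass an explicit time zone with the request so the agent's default
# time zone is not used for relative times like "in 15 minutes"
query_params = dialogflow.QueryParameters(time_zone="Europe/London")

response = session_client.detect_intent(
    request={
        "session": session,
        "query_input": query_input,
        "query_params": query_params,
    }
)
print(response.query_result.parameters)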

Azure adds a timestamp at the beginning of logs

I have a problem with the logs retrieved from my Docker containers with Azure Log Analytics: all logs come through fine, but Azure adds a date at the beginning of each line of the log, which means that an entry is created for each line and I can't analyze my logs correctly because they are split up...
For example, in this image, the black rectangle shows the date added (by Azure, I think) and the red rectangle shows the date appearing in my logs:
Also, if there is no date on a line of my logs, there is still an added date on all lines, even the empty ones.
The problem is that Azure cuts my log file line by line, adding a date to each line, when I would like it to delimit entries using the dates already present in my log files.
Do you have any solutions?
One of the solutions I can think of is that, when you query the logs, you can use the replace() function to replace the redundant date (replace it with an empty string, etc.). You need to write the proper regular expression for your purpose.
A dummy query like the one below:
ContainerLog
| extend new_logEntry = replace(@'xxx', @'xxx', LogEntry)
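For instance, a hypothetical concrete version that strips an ISO 8601 timestamp from the start of each line; the exact pattern depends on the date format that actually appears in your entries:
ContainerLog
| extend new_logEntry = replace(@'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})?\s*', '', LogEntry)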
Currently Azure Monitor for containers doesn’t support multi-line logging, but there are workarounds available. You can configure all the services to write in JSON format and then Docker/Moby will write them as a single line.
https://learn.microsoft.com/fr-fr/azure/azure-monitor/insights/container-insights-faq#how-do-i-enable-multi-line-logging
