How to change the thresholds for failure and warning on SAP S/4HANA Cloud SDK Pipeline? - sap-cloud-sdk

We are using the SAP S/4HANA Cloud SDK pipeline in our project and have the configuration below in place for JMeter tests. However, I would like to change the thresholds for failure and warning. How can I customize these values?
checkJMeter:
  options: ''
  testPlan: './performance-tests/JMeter/*'
  dockerImage: 'famiko/jmeter-base'

Use the configuration below to customize the thresholds.
checkJMeter:
  options: ''
  testPlan: './performance-tests/JMeter/*'
  dockerImage: 'famiko/jmeter-base'
  failThreshold: 80 # configurable
  unstableThreshold: 70 # should always be less than failThreshold
The default values for failThreshold and unstableThreshold are 100 and 90, respectively.
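As a rough illustration only (this is not the pipeline's actual implementation; the function name and the comparison direction are assumptions), a two-level threshold check of this shape is why unstableThreshold must stay below failThreshold:

```python
def classify(value, fail_threshold=100, unstable_threshold=90):
    """Illustrative two-level threshold check; not the pipeline's real code.

    Assumes the build fails once `value` reaches fail_threshold and is
    only marked unstable when it reaches unstable_threshold instead.
    """
    if unstable_threshold >= fail_threshold:
        raise ValueError("unstableThreshold must be less than failThreshold")
    if value >= fail_threshold:
        return "FAILURE"
    if value >= unstable_threshold:
        return "UNSTABLE"
    return "SUCCESS"

# With the customized thresholds from the question (80/70):
print(classify(75, fail_threshold=80, unstable_threshold=70))  # UNSTABLE
```

If the two thresholds were equal or inverted, the "unstable" band between them would vanish, which is why the config comment insists unstableThreshold stay below failThreshold.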

Related

Can't change RDS Postgres major version from the AWS console?

I have an RDS Postgres database, currently sitting at version 14.3.
I want to schedule a major version upgrade to 14.5 to happen during the maintenance window.
I want to do this manually via the console, because last time I did a major version upgrade by changing the CDK definition, the deploy command applied the DB version change immediately, resulting in a few minutes' downtime of the database (manifesting as connection errors in the application connecting to the database).
When I go into the AWS RDS console, do a "modify" action and select the "DB Engine Version" - it only shows one option, which is the current DB version: "14.3".
According to the RDS docs, 14.4, 14.5 and 14.6 are all valid upgrade targets: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion
Also, when I do aws rds --profile raido-prod describe-db-engine-versions --engine postgres --engine-version 14.3 it shows those versions in the ValidUpgradeTarget collection.
Using CDK version 2.63.0
Database CDK code:
// starting with 14.3 to test the manual upgrade process
const engineVersion14_3 = PostgresEngineVersion.VER_14_3;
const dbParameterGroup14_3 = new ParameterGroup(this, 'Postgres_14_3', {
  description: "RaidoDB postgres " + engineVersion14_3.postgresFullVersion,
  engine: DatabaseInstanceEngine.postgres({
    version: engineVersion14_3,
  }),
  parameters: {
    // none we need right now
  },
});
/* Note that even after this stack has been deployed, this param group
   will not be created; I guess it will only be created when it's attached
   to an instance? */
const engineVersion14_5 = PostgresEngineVersion.VER_14_5;
// CDK strips out underscores from the name, hoping periods will remain
const dbParameterGroup14_5 = new ParameterGroup(this, 'Postgres.14.5.', {
  description: "RaidoDB postgres " + engineVersion14_5.postgresFullVersion,
  engine: DatabaseInstanceEngine.postgres({
    version: engineVersion14_5,
  }),
  parameters: {
    // none we need right now
  },
});
this.pgInstance = new DatabaseInstance(this, 'DbInstance', {
  databaseName: this.dbName,
  instanceIdentifier: this.dbName,
  credentials: Credentials.fromSecret(
    this.raidoDbSecret,
    this.userName,
  ),
  vpc: props.vpc,
  vpcSubnets: {
    subnetGroupName: props.subnetGroupName,
  },
  publiclyAccessible: false,
  subnetGroup: dbSubnetGroup,
  multiAz: false,
  availabilityZone: undefined,
  securityGroups: [props.securityGroup],
  /* Should we size a bigger instance for prod?
     Plan is to wait until it's needed - there will be some downtime for
     changing these. There's also the "auto" thing. */
  allocatedStorage: 20,
  instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.SMALL),
  engine: DatabaseInstanceEngine.postgres({
    version: engineVersion14_3,
  }),
  parameterGroup: dbParameterGroup14_3,
  /* Not sure what this does, changing it to true didn't allow me to change
     the version in the console. */
  allowMajorVersionUpgrade: true,
  /* 14.3.x -> 14.3.x+1 will happen automatically in the maintenance window,
     with potential downtime. */
  autoMinorVersionUpgrade: true,
  // longer in prod
  backupRetention: Duration.days(30),
  /* This enables DB termination protection.
     When the stack is destroyed, the db will be detached from the stack but
     not deleted. */
  removalPolicy: RemovalPolicy.RETAIN,
  // explain and document the threat model before changing this
  storageEncrypted: false,
  /* "Enhanced monitoring".
     I turned this on while trying to figure out how to change the DB version.
     I still don't think we should have it enabled until we know how/when we'll
     use it - because it costs money in CloudWatch Logging, Metric fees and
     performance (execution and logging of the metrics from the DB server). */
  monitoringInterval: Duration.minutes(1),
  monitoringRole: props.monitoringRole,
  /* Useful for identifying expensive queries and missing indexes.
     Retention default of 7 days is fine. */
  enablePerformanceInsights: true,
  // UTC
  preferredBackupWindow: '11:00-11:30',
  preferredMaintenanceWindow: 'Sun:12:00-Sun:13:00',
});
So, the question: What do I need to do in order to be able to schedule the DB version upgrade in the maintenance window?
I made a lot of changes during the day trying to diagnose the issue before I posted this question, thinking I must be doing something wrong.
When I came back to work the next day, the modify screen DB Engine Version field contained the upgrade options I was originally expecting.
Below is my documentation of the issue (unfortunately, our CDK repo is not public):
Carried out by STO, CDK version was 2.63.0.
This page documents my attempt to manually schedule the DB version upgrade
using the console for application during the maintenance window.
We need to figure out how to do this since the DB upgrade process results in a
few minutes of downtime, so we'd prefer to avoid doing it in-hours.
It's preferred that we figure out how to schedule the upgrade - if the choice
comes down to asking team members to work outside of hours or accepting
downtime, we will generally choose to have the downtime during business hours.
Note that the Postgres instance create time is:
Sat Jan 28 2023 10:34:32 GMT+1000
Action plan
in the RDS console
Modify DB instance: raido
DB engine version: change 14.3 to 14.5
save and select "schedule upgrade for maintenance window" (as
opposed to "apply immediately")
Actions taken
2023-02-02
When I tried to just go in and change it manually in the AWS console, the only
option presented on the "modify" screen was 14.3 - there was nothing to
change the version to.
I tried creating a 14.5 param group in CDK, but it was ignored and the param
group wasn't created - I'm guessing because it's expected to actually be
attached to a DB.
Tried copying and creating a new param group to change the version, but there's
no "version" param in the param group.
Tried manually creating a DB of version 14.5, "sto-manual-14-5", but after the
DB was reported successfully created (as 14.5), "14.3" was still the only
option in the "modify" screen for raido-db.
Tried creating a t3.micro in case t4g was the problem - no good.
Tried disabling minor version auto-upgrade - no good.
Note that 14.6 is the currently most recent version; both manually created 14.3
and 14.5 databases presented no other versions to upgrade to, so this problem
doesn't seem to be related to the CDK.
List available upgrades: aws rds --profile raido-prod describe-db-engine-versions --engine postgres --engine-version 14.3
Shows 14.4, 14.5 and 14.6 as valid target versions.
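If you only want the upgrade-target version numbers, a JMESPath --query can filter the same call (a sketch; assumes the AWS CLI and the raido-prod profile from the post):

```shell
# Print only the engine versions RDS will accept as upgrade targets
# for Postgres 14.3 (one version per whitespace-separated field).
aws rds describe-db-engine-versions \
  --profile raido-prod \
  --engine postgres \
  --engine-version 14.3 \
  --query 'DBEngineVersions[].ValidUpgradeTarget[].EngineVersion' \
  --output text
```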
This page also shows the versions that should be valid upgrade targets, as at
2023-02-02: https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion
After all this, I noticed that we had the instance declared as
allowMajorVersionUpgrade: false, so I changed that to true and deployed,
but still can't select any other versions.
Also tried aws rds --profile raido-prod describe-pending-maintenance-actions
but it showed no pending actions.
I found this SO answer talking about RDS problems with T3 instances (note, I
have previously upgraded major versions of T3 RDS postgres instances):
https://stackoverflow.com/a/69295017/924597
On the manually created t3.micro instance, I tried upgrading the instance size
to a standard large instance. Didn't work.
Found this SO answer talking about the problem being related to having
outstanding "recommended actions" entries:
https://stackoverflow.com/a/75236812/924597
We did have an entry talking about "enhanced monitoring".
Tried enabling "enhanced monitoring" via the CDK, because it was listed as
a "recommended action" on the RDS console.
After deploying the CDK stack, the console showed the enhanced monitoring was
enabled, but the "recommended action" to enable it was still listed.
At this point, the console still showed 14.3 as the only option in the list on
the modify screen.
Posted to StackOverflow: Can't change RDS Postgres major version from the AWS console?
Posted to AWS repost: https://repost.aws/questions/QU4zuJeb9OShGfVISX6_Kx4w/
Stopped work for the day.
2023-02-03
In the morning, the RDS console no longer shows the "recommended action" to
enable enhanced monitoring.
The modify screen now shows "14.3, 14.4, 14.5 and 14.6" as options for the
DB Engine Version (as expected and originally desired).
Given the number of changes I tried above, I'm not sure what, if any of them
may have caused the console to start displaying the correct options.
It may have been a temporary issue with RDS, or AWS support may have seen my
question on the AWS repost forum and done something to the account.
Note that I did not raise a formal support request via the AWS console.
I wanted to try to confirm whether enhanced monitoring was the cause of the
issue, so I changed the CDK back (there is no "enhanced monitoring" flag; I
just commented out the code that set the monitoring role and interval).
After deploying the CDK stack, there was no change to the RDS instance -
enhanced monitoring was still enabled.
I did a manual modify via the RDS console to disable enhanced monitoring.
The change did apply and was visible in the console, but the "recommended
actions" list did not show any issues.
At this point I had to attend a bunch of meetings, lunch, etc.
When I came back after lunch, the "recommended actions" list now shows an
"enhanced monitoring" entry.
But the modify console page still shows the 14.3 - 14.6 DB engine options, so
I don't think "enhanced monitoring" was the cause of the problem.
I scheduled the major version upgrade (14.3 -> 14.5, because 14.6 is not yet
supported by the CDK) for the next maintenance window.
Analysis
My guess is that the issue was caused by having allowMajorVersionUpgrade set
to false. I think changing it to true is what caused the other
version options to eventually show up on the modify page. I think the
reason the options didn't show up immediately after deploying the
CDK change is that it took a while for eventual consistency to converge.

Dast Authentication Issues in Gitlab CICD on Angular Website

I'm having an issue with the built in gitlab dast scanning and authentication in the pipeline.
The application that is attempting to be scanned is an angular app using the aspnetzero framework.
In gitlab the cicd file uses the dast UI configuration to setup the job and in the cicd yml file the job spec looks like:
# Include the DAST template
include:
  - template: DAST.gitlab-ci.yml
# Your selected site and scanner profiles:
dast:
  stage: dast
  dast_configuration:
    site_profile: "auth"
    scanner_profile: "default"
The site profile has the proper authentication data set up, but when I run the DAST scanning job I get errors in the logs like:
2022-07-12T22:00:16.000 INF NAVDB Load URL added to crawl graph
2022-07-12T22:00:16.000 INF AUTH Attempting to authenticate
2022-07-12T22:00:16.000 INF AUTH Loading login page LoginURL=https://example.com/account
2022-07-12T22:00:23.000 WRN BROWS response body exceeds allowed size allowed_size_bytes=10000000 request_id=interception-job-4.0 response_size_bytes=11100508 url=https://example.com/main.f3808aecbe8d4efb.js
2022-07-12T22:00:38.000 WRN CONTA request failed, attempting to continue scan error=net::ERR_BLOCKED_BY_RESPONSE index=0 requestID=176.5 url=https://example.com/main.f3808aecbe8d4efb.js
2022-07-12T22:00:39.000 INF AUTH Writing authentication report path=/zap/wrk/gl-dast-debug-auth-report.html
2022-07-12T22:00:39.000 INF AUTH skipping writing of JSON cookie report as there are no cookies to write
2022-07-12T22:00:40.000 FTL MAIN Authentication failed: failed to load login page: expected to find a single element for selector css:#manual_login to follow path to login form, found 0
2022-07-12 22:00:40,059 Browserker completed with exit code 1
2022-07-12 22:00:40,060 BrowserkerError: Failure while running Browserker 1.Exiting scan
...extension.ExtensionLoader - Initializing Provides the foundation for concrete message types (for example, HTTP, WebSockets) expose fuzzer implementations.
[zap_server] 13499 [ZAP-daemon] INFO org.parosproxy.paros.extension.ExtensionLoader - Initializing Allows to fuzz HTTP messages.
It seems like the container doing the DAST scanning can't properly load the Angular JavaScript file since it exceeds the allowed response size, so the actual login form does not load. Is there a way to increase the allowed response size so that the login form loads properly?
I've tried various options like setting the stability timeout variables, and even increasing the memory for the ZAP process (DAST_ZAP_CLI_OPTIONS: '-Xmx3072m'), but I'm still getting the same result: the login form isn't loading, most likely because the JavaScript isn't loading properly.
The fix turned out to be a GitLab DAST CI/CD variable that isn't in any of the current documentation I could find.
To view all the available options and parameters, I updated the CI/CD file with the following:
include:
  - template: DAST.gitlab-ci.yml
dast:
  script:
    - /analyze --help
so I could see the options available. From this I was able to find the DAST_BROWSER_MAX_RESPONSE_SIZE_MB variable. Setting that variable fixed my issue.
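For reference, a sketch of how that variable could be set on the dast job (the 25 MB value is an arbitrary illustrative choice, sized above the ~11 MB bundle reported in the log):

```yaml
include:
  - template: DAST.gitlab-ci.yml

dast:
  stage: dast
  variables:
    # Raise the per-response size limit so large bundled JS files load
    # (value in megabytes; 25 is an illustrative choice, not a documented default)
    DAST_BROWSER_MAX_RESPONSE_SIZE_MB: "25"
  dast_configuration:
    site_profile: "auth"
    scanner_profile: "default"
```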

alertmanager filter by tag (timescale backend)

I am using alertmanager configured to read from a timescale db shared with other Prometheus/alertmanager systems.
I would like to set/check alerts only for services that include a specific tag, so I am wondering how I could configure Prometheus to apply alerts only for specific tags.
This is what currently I am using:
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['localhost:9093']
remote_write:
  - url: https://promscale.host:9201/write
remote_read:
  - url: https://promscale.host:9201/read
    read_recent: true
...
I found there is an option, alert_relabel_configs, but its usage is unclear to me.
Any ideas?
FYI, alert_relabel_configs is used to relabel alerts before they are sent to the Alertmanager.
To use alert_relabel_configs, below is an example that adds a new label when the relabel rule matches:
alert_relabel_configs:
  - source_labels: [ log_level ]
    regex: warn
    target_label: severity
    replacement: warn
Note: The alerts are only changed when sent to alertmanager. They are
not changed in the Prometheus UI.
To test the relabel config online you can use https://relabeler.promlabs.com/
If you are using Prometheus Operator configuring alert relabeling rules should be done in additionalAlertRelabelConfigs of PrometheusSpec, more details: https://github.com/prometheus-operator/prometheus-operator/issues/1805
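To make the relabeling semantics concrete, here is a minimal Python sketch (not Prometheus source code) of the default replace action used in the example above: the source_labels values are joined, the regex is matched against the whole joined value, and on a match target_label is set to replacement.

```python
import re

def apply_relabel(labels, source_labels, regex, target_label, replacement):
    """Minimal sketch of one Prometheus relabel rule (default 'replace' action).

    Illustrative only; Prometheus itself implements this in Go with more
    features (separator, capture-group expansion, other actions).
    """
    # Values of the source_labels are joined with ';' (Prometheus's default separator)
    value = ";".join(labels.get(name, "") for name in source_labels)
    # The regex is anchored: it must match the entire joined value
    if re.fullmatch(regex, value):
        labels = dict(labels, **{target_label: replacement})
    return labels

alert = {"alertname": "DiskFilling", "log_level": "warn"}
relabeled = apply_relabel(alert, ["log_level"], "warn", "severity", "warn")
print(relabeled)  # {'alertname': 'DiskFilling', 'log_level': 'warn', 'severity': 'warn'}
```

An alert whose log_level is anything other than warn passes through unchanged, which matches the note above: relabeling only affects what is sent to Alertmanager.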

Why ClientRequest.* and storage.* metrics are not reported with builtin ConsoleReporter in DSE Cassandra?

I am trying to get metrics from DSE Cassandra (DSE 5.1.0, Cassandra 3.10.0.1652) using built-in reporters like ConsoleReporter. I am able to get all the metrics except those under ClientRequest.* and Storage.*, even though I have reads/writes to this cluster. The only metric under the ClientRequest.* group is org.apache.cassandra.metrics.ClientRequest.ViewPendingMutations.ViewWrite.
I tried different reporter configs, but no luck, and I didn't find any associated JIRA either. The same behavior occurs with the StatsD reporter as well.
Here is the reporter config with a wildcard whitelist:
console:
  -
    outfile: '/tmp/metrics.out'
    period: 10
    timeunit: 'SECONDS'
    predicate:
      color: "white"
      useQualifiedName: true
      patterns:
        - ".*"
Both the ClientRequest and Storage metrics are critical for me. Does anybody have any pointers on why I am not getting these metrics? I appreciate any insights on resolving this issue.
It seems to be some issue with the DSE version of Cassandra; I'm not sure if something is broken in the latest version of DSE/Cassandra. I tested with open source Cassandra 3.9.0 and it works there: I was able to get all the metrics under ClientRequest.* and Storage.* with open source Cassandra 3.9.0.

Azure Functions timeout for Consumption plan

Is there a way to change the current 5-minute timeout limit for Azure Functions running under the Consumption plan?
For some data analytics computations 5 minutes is not enough time.
The alternative of using webjobs doesn't allow parallel execution of the function.
(The other answer is a bit confusing, so I'm writing a new answer instead of editing it heavily.)
Azure Functions can now run up to 10 minutes using the consumption plan by adding the functionTimeout setting to your host.json file:
In a serverless Consumption plan, the valid range is from 1 second to 10 minutes, and the default value is 5 minutes.
In both Premium and Dedicated (App Service) plans, there is no overall limit, and the default value is 30 minutes. A value of -1 indicates unbounded execution, but keeping a fixed upper bound is recommended.
Source: https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#functiontimeout
File: host.json
// Value indicating the timeout duration for all functions.
// Set functionTimeout to 10 minutes
{
  "functionTimeout": "00:10:00"
}
Source:
https://buildazure.com/2017/08/17/azure-functions-extend-execution-timeout-past-5-minutes/
https://github.com/Azure/azure-webjobs-sdk-script/wiki/host.json
Azure Functions can now run up to 10 minutes using the consumption plan:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#functiontimeout
Here is the complete host.json, according to the Microsoft docs:
Don't forget to restart the Function to reload the Configuration!
{
  "version": "2.0",
  "managedDependency": {
    "Enabled": true
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  },
  "functionTimeout": "00:05:00"
}
Another trick is to define only the required Az modules in requirements.psd1, not all of them:
Bad:
# This file enables modules to be automatically managed by the Functions service.
# See https://aka.ms/functionsmanageddependency for additional information.
#
@{
    # For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
    # To use the Az module in your function app, please uncomment the line below.
    'Az' = '6.*'
}
Good:
# This file enables modules to be automatically managed by the Functions service.
# See https://aka.ms/functionsmanageddependency for additional information.
#
@{
    # For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
    # To use the Az module in your function app, please uncomment the line below.
    # 'Az' = '6.*'
    'Az.Accounts' = '2.*'
    'Az.Resources' = '4.*'
    'Az.Monitor' = '2.*'
}
You can change the plan to Premium, but you need to create a new Function App because you can't change the plan once it's created. The Premium plan has no overall limit.
Here is the official documentation.
