Tenant-specific Resilience Configuration - sap-cloud-sdk

We want to provide options for customers to configure resilience, for example the circuit breaker configuration. Kindly let us know whether there is a way we can provide tenant-specific configuration in the Cloud SDK.

We have already tried the multi-tenant approach for achieving resilience. But in our scenario we use the resilience approach even for customer systems (on-premise systems), where customers are asking us to provide the flexibility to decide parameters like slowCallRate according to their landscape. If we need to externalise the CircuitBreaker configuration to customers so that they can alter some values to suit their setup, how can we do it?
Thanks
Sagar

Let's assume you want to call the SAP BusinessPartner service (OData V2) via the destination configured in your Cloud Foundry Destination Service under the name "MyDestination".
Then your code would look like this with SAP Cloud SDK:
HttpDestination destination = DestinationAccessor.getDestination("MyDestination").asHttp();
BusinessPartnerService service = new DefaultBusinessPartnerService();
List<BusinessPartner> items = service
    .getAllBusinessPartner()
    .top(10)
    .executeRequest(destination);
If you want to apply resilience features to the OData request, then it would look like this:
HttpDestination destination = DestinationAccessor.getDestination("MyDestination").asHttp();
BusinessPartnerService service = new DefaultBusinessPartnerService();
ResilienceConfiguration configuration = ResilienceConfiguration.of("ten-businesspartners");
List<BusinessPartner> items = ResilienceDecorator.executeSupplier(
    () -> service.getAllBusinessPartner().top(10).executeRequest(destination),
    configuration);
You can customize the instance of ResilienceConfiguration to your needs at runtime, e.g. thresholds for the circuit breaker, like this (with default values):
ResilienceConfiguration.of("ten-businesspartners")
    .circuitBreakerConfiguration(ResilienceConfiguration.CircuitBreakerConfiguration.of()
        .waitDuration(Duration.ofSeconds(10))
        .closedBufferSize(10)
        .failureRateThreshold(50)
        .halfOpenBufferSize(5)
    );

Related

Assign Application Insights cloud_RoleName to Windows Service running w/ OWIN

I have an application built from a series of web servers and microservices, perhaps 12 in all. I would like to monitor and, importantly, map this suite of services in Application Insights. Some of the services are built with .NET Framework 4.6 and deployed as Windows services using OWIN to receive and respond to requests.
In order to get the instrumentation working with OWIN I'm using the ApplicationInsights.OwinExtensions package. I'm using a single instrumentation key across all my services.
When I look at my Application Insights Application Map, it appears that all the services I've instrumented are grouped into a single "application", with a few "links" to outside dependencies. I do not seem to be able to produce the "Composite Application Map" whose existence is suggested here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-app-map.
I'm assuming this is because I have not set a different "RoleName" for each of my services. Unfortunately, I cannot find any documentation that describes how to do so. My map looks as follows, but the big circle in the middle is actually several different microservices:
I do see that the OwinExtensions package offers the ability to customize some aspects of the telemetry reported but, without a deep knowledge of the internal structure of App Insights telemetry, I can't figure out whether it allows the RoleName to be set and, if so, how to accomplish this. Here's what I've tried so far:
appBuilder.UseApplicationInsights(
    new RequestTrackingConfiguration
    {
        GetAdditionalContextProperties = ctx =>
            Task.FromResult(
                new[] { new KeyValuePair<string, string>("cloud_RoleName", ServiceConfiguration.SERVICE_NAME) }.AsEnumerable()
            )
    }
);
Can anyone tell me how, in this context, I can instruct App Insights to collect telemetry which will cause a Composite Application Map to be built?
The following is the overall doc about TelemetryInitializer, which is exactly what you want for setting additional properties on the collected telemetry - in this case, setting the Cloud RoleName to enable the application map.
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling#add-properties-itelemetryinitializer
Your telemetry initializer code would be something along the following lines...
public void Initialize(ITelemetry telemetry)
{
    if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
    {
        // set role name correctly here.
        telemetry.Context.Cloud.RoleName = "RoleName";
    }
}
Please try this and see if this helps.
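For the initializer to take effect it also has to be registered with the active telemetry configuration before any telemetry is sent. A minimal sketch putting both pieces together (the class name and role-name string are placeholders, not from the question):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Placeholder initializer: stamps every telemetry item with this service's role name.
public class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            telemetry.Context.Cloud.RoleName = "MyServiceName"; // placeholder role name
        }
    }
}

// Register once at service start-up (classic SDK):
// TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloudRoleNameInitializer());

Each service should register its own initializer with its own role name; the Application Map then renders one node per distinct RoleName.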

Virus Scanning Uploaded files from Azure Web/Worker Role

We are designing an Azure Website which will allow users to upload content (MP4, DOCX and other MS Office files) which can then be accessed.
We will encode some video content into several different quality formats before it is streamed (using Azure Media Services).
We need to add an intermediate step so we can scan uploaded files for potential virus risk. Is there functionality built into Azure (or third party) which will allow us to call an API to scan content before processing it? We are ideally looking for an API rather than just a background service on a VM, so we can get feedback for use in a web or worker role.
I had a quick look at Symantec Endpoint and Windows Defender, but I'm not sure these offer an API.
I have successfully done this using the open source ClamAV. You don't specify what languages you are using, but as it's Azure I'll assume .NET.
There is a .Net wrapper that should provide the API that you are looking for:
https://github.com/tekmaven/nClam
Here is some sample code (note: this is copied directly from the nClam GitHub repo page and reproduced here just to protect against link rot)
using System;
using System.Linq;
using nClam;

class Program
{
    static void Main(string[] args)
    {
        var clam = new ClamClient("localhost", 3310);
        var scanResult = clam.ScanFileOnServer("C:\\test.txt"); // any file you would like!

        switch (scanResult.Result)
        {
            case ClamScanResults.Clean:
                Console.WriteLine("The file is clean!");
                break;
            case ClamScanResults.VirusDetected:
                Console.WriteLine("Virus Found!");
                Console.WriteLine("Virus name: {0}", scanResult.InfectedFiles.First().VirusName);
                break;
            case ClamScanResults.Error:
                Console.WriteLine("Woah an error occurred! Error: {0}", scanResult.RawResult);
                break;
        }
    }
}
There are also APIs available for refreshing the virus definition database. All the necessary ClamAV files can be included in the deployment package and any configuration can be put into the service start-up code.
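If you would rather scan uploads without writing them to disk first, newer versions of nClam also expose asynchronous methods that accept a stream. A minimal sketch, assuming a clamd instance is reachable on localhost:3310 (the surrounding class and method names are illustrative):

using System.IO;
using System.Threading.Tasks;
using nClam;

class UploadScanner
{
    // Scans an uploaded stream directly, without persisting it to disk first.
    // "localhost"/3310 assume a clamd daemon reachable from the role instance.
    public static async Task<bool> IsCleanAsync(Stream uploadedFile)
    {
        var clam = new ClamClient("localhost", 3310);
        var result = await clam.SendAndScanFileAsync(uploadedFile);
        return result.Result == ClamScanResults.Clean;
    }
}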
ClamAV is a good idea, especially now that 0.99 is about to be released with YARA rule support - it will make it really easy for you to write custom rules and will allow ClamAV to use the many good YARA rules in the open today.
Another route, and a bit of shameless plugging, is to check out scanii.com. It's a SaaS for malware/virus detection and it integrates quite nicely with AWS and Azure.
There are a number of options to achieve this:
Firstly, you can use ClamAV as already mentioned. ClamAV doesn't always receive the best press for its virus databases, but as others have pointed out it's easy to use and is expandable.
You can also install a commercial scanner, such as AVG or Kaspersky. Many of these come with a C API that you can talk to directly, although getting access to it can often be expensive from a licensing point of view.
Alternatively you can make calls to the executable directly using something like the following to capture the output:
var proc = new Process {
    StartInfo = new ProcessStartInfo {
        FileName = "scanner.exe",
        Arguments = "arguments needed",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    }
};

proc.Start();
while (!proc.StandardOutput.EndOfStream) {
    // Each line of the scanner's output arrives here; parse it for the verdict.
    string line = proc.StandardOutput.ReadLine();
}
proc.WaitForExit();
You would then need to parse the output to get the result and use it within your application.
Finally, there are now some commercial APIs available to do this kind of thing, such as attachmentscanner (disclaimer: I'm related to this product) or scanii. These will provide you with an API and a more scalable option to scan specific files and receive the response from at least one virus checking engine.
New thing coming Spring/Summer 2020: Advanced Threat Protection for Azure Storage includes Malware Reputation Screening, which detects malware uploads using hash reputation analysis, leveraging the power of Microsoft Threat Intelligence and its hashes for viruses, trojans, spyware and ransomware. Note: hash reputation analysis cannot guarantee that every piece of malware will be detected.
https://techcommunity.microsoft.com/t5/Azure-Security-Center/Validating-ATP-for-Azure-Storage-Detections-in-Azure-Security/ba-p/1068131

Sync Framework - Check logic before synchronizing

I'm wondering whether I can sync two databases with some logic.
DbSyncTableDescription user = SqlSyncDescriptionBuilder.GetDescriptionForTable("User", sqlServerConn);
DbSyncTableDescription role = SqlSyncDescriptionBuilder.GetDescriptionForTable("Role", sqlServerConn);
DbSyncTableDescription usersInRoles = SqlSyncDescriptionBuilder.GetDescriptionForTable("UsersInRoles", sqlServerConn);
For example, sync administrators to the User table, but do not sync staff.
Thanks in advance!
You can define a filter in your scope (a minimal sketch follows below)... see: Walkthrough: Defining Filtered Scope and Provisioning Server Database
Or if you want more logic around what needs to be synced, you can intercept the changes in the ChangesSelected event... see: Manipulating the change dataset in Sync Fx
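Following up on the first option, here is a minimal sketch of provisioning a filtered scope with the Sync Framework 2.1 SQL providers. The scope name and the "Role" filter column are illustrative (they assume your User table carries a role column); the types live in Microsoft.Synchronization.Data and Microsoft.Synchronization.Data.SqlServer:

// Describe a scope that only contains the User table.
DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription("AdminsOnlyScope");
scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("User", sqlServerConn));

SqlSyncScopeProvisioning provisioning = new SqlSyncScopeProvisioning(sqlServerConn, scopeDesc);

// Only rows matching the filter clause are synchronized for this scope.
provisioning.Tables["User"].AddFilterColumn("Role");
provisioning.Tables["User"].FilterClause = "[side].[Role] = 'Administrator'";

provisioning.Apply();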

How do domain objects and services interact in DDD?

'So it was in this context we created a Order.adjust() method that delegated the call to OrderAdjust Service.
Having Order.adjust() has an advantage that it makes Order own the adjust operation.'
How is this done? Is the domain service injected?
$order = new Order();
$order->adjust(???);
How can the domain service do operations on domain entities when it's stateless?
If a domain service is injected into an entity, methods can only be called on the reference and thus state must exist?
$service = new DomainService();
$entity = new DomainEntity();
$entity->operation($service);

// Inside DomainEntity
public function operation(DomainService &$service)
{
    // Operations are delegated to the domain service reference
    $service->operation();
    $service->operation2();
}

$another_entity = new AnotherDomainEntity();
// What happened in the first object must be known here,
// otherwise what's the point?
$another_entity->operation($service);
Shouldn't it be done like this or in an application service?
$domain_service = new DomainService();
$entity = new DomainEntity();
$another_entity = new AnotherDomainEntity();
$domain_service->performOperation($entity, $another_entity);
How are the operations between domain entities/objects done?
How do domain objects in general communicate? Where are they instantiated?
Code examples would be greatly appreciated.
Source:
http://stochastyk.blogspot.no/2008/05/domain-services-in-domain-driven-design.html
The question is similar to this one: https://softwareengineering.stackexchange.com/a/62193/19252.
The blog post you referenced does a good job of answering your question. To make it short: if it can be done (and unit-tested!) in a model, do it there. Domain services are the exception rather than the rule.
Let me quote that post:
"- Aren't Services bad, and shouldn't we use all objects as per OO?
Yes, Services tend to stand orthogonal to Object Oriented Design. [...] There is a huge tendency in the modelling world to use an excessive number of services"
As for me, the tendency comes from flaws in .NET/Java persistence architectures, like the impossibility of putting business logic into setter methods.
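To make the delegation concrete, here is a minimal sketch of the style the blog post describes, written in C# for illustration (the pattern is language-agnostic, and every type name here is invented): the entity owns the operation but delegates the calculation to a stateless service passed in as an argument.

// All types are illustrative, not taken from the referenced blog post.
public interface IOrderAdjustmentService
{
    // Stateless: everything the service needs arrives through its parameters.
    decimal CalculateAdjustment(Order order);
}

public class Order
{
    public decimal Total { get; private set; }

    public Order(decimal total) { Total = total; }

    // The entity owns the operation but delegates the policy to the service.
    public void Adjust(IOrderAdjustmentService service)
    {
        Total += service.CalculateAdjustment(this);
    }
}

// Usage, typically wired up in an application service:
// var order = new Order(100m);
// order.Adjust(new SeasonalAdjustmentService());

Because the service keeps no state of its own, nothing needs to be "remembered" between calls on different entities; any result that must be shared has to be recorded in the entities themselves or coordinated by an application service.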

Azure Diagnostics - runtime def vs. wadcfg

I'm trying to understand the various ways to configure the Diagnostics in Windows Azure.
So far I've set up a diagnostics.wadcfg that is properly used by Azure, as I can retrieve its content in the XML blob stored by Diagnostics in the wad-control-container (and the tables are updated at the correct refresh rate).
Now I would like to override some fields from the cscfg, for example in order to boost the log transfer period for all instances (without having to update each wad-control-container file, which would be erased on instance recycle anyway).
So in my WebRole.Run(), I get a parameter from RoleEnvironment.GetConfigurationSettingValue() and try to apply it to the current config; but my problem is that the values I read from DiagnosticMonitor.GetDefaultInitialConfiguration() do not correspond to the content of my diagnostics.wadcfg, and setting new values there doesn't seem to have any effect.
Can anyone explain the relationship between what's taken from diagnostics.wadcfg and the values you can set at run-time?
Thanks
GetDefaultInitialConfiguration() will not return your current settings because, as its name states, it builds a default configuration. You have to use the GetCurrentConfiguration method if you need the configuration that is actually in place.
However, if you just need to boost the transfer, you could use, for example, Cerebrata's Azure Diagnostics Manager to quickly kick off an on-demand transfer for your roles.
You could also use the Windows Azure Diagnostics Management cmdlets for PowerShell. Check out this article.
Hope this helps!
In order to utilize the values in the wadcfg file, the following code could be used to access the current DiagnosticMonitorConfiguration:
var cloudStorageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue(WADStorageConnectionString));
var roleInstanceDiagnosticManager = cloudStorageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name,
    RoleEnvironment.CurrentRoleInstance.Id);
var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();

// Set different logging settings
dmc.Logs....
dmc.PerformanceCounters....

// don't forget to update
roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);
The code by Boris Lipshitz doesn't work now (Breaking Changes in Windows Azure Diagnostics (SDK 2.0)): "the DeploymentDiagnosticManager constructor now accepts a connection string to the storage account instead of a CloudStorageAccount object".
Updated code for SDK 2.0+:
var roleInstanceDiagnosticManager = new RoleInstanceDiagnosticManager(
    // Add StorageConnectionString to your role settings for this to work
    CloudConfigurationManager.GetSetting("StorageConnectionString"),
    RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name,
    RoleEnvironment.CurrentRoleInstance.Id);
var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();

// Set different logging settings
dmc.Logs....
dmc.PerformanceCounters....

// don't forget to update
roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);
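Tying this back to the original question, the elided Logs settings above are where a value read from the cscfg would be applied. A hedged sketch (the "LogTransferPeriodMinutes" setting name is illustrative; ScheduledTransferPeriod and ScheduledTransferLogLevelFilter are the WAD configuration properties, with LogLevel coming from Microsoft.WindowsAzure.Diagnostics):

// Illustrative: read the desired transfer period from role configuration (cscfg)
// and apply it to the running diagnostics configuration.
int minutes = int.Parse(
    RoleEnvironment.GetConfigurationSettingValue("LogTransferPeriodMinutes"));
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(minutes);
dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;

roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);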
