functions implementation - groovy

I have implemented multiple Groovy methods for sending different notifications, but the code defeats the point of having reusable functions. I want to rewrite/combine all of these Groovy methods into one single method, so that I can call that one method wherever I need it, regardless of success or failure, and pass the message as a parameter.
static void sendSuccessApplicationNotification(p1, p2, p3, p4) {
    def x = Notify(this)
    // the same message is sent to two targets, p3 and p4
    x.triggerBuild("SUCCESSFUL, application ${p1}:${p2} started properly", "${p3}")
    x.triggerBuild("SUCCESSFUL, application ${p1}:${p2} started properly", "${p4}")
}
Finally, all of the above should be converted into one method. I have checked many articles but could not find an exact example.

You can use the Groovy template engine in your generalized function:
import groovy.text.SimpleTemplateEngine

void triggerBuild(a, b) {
    println "${a} >>>> ${b}"
}

void sendNotification(code, Map parms, List nodes) {
    def templates = [
        'appY': 'SUCCESSFUL, application ${app}:${ver} started properly',
        'appN': 'FAILED, application ${app}:${ver} failed to start properly',
        'depY': 'SUCCESSFUL deployment of ${app}:${ver} to ${node}<br>Executed by ${user}',
        'depN': 'FAILED deployment of ${app}:${ver} to ${node}<br>Executed by ${user}'
    ]
    def template = templates[code]
    assert template
    def message = new SimpleTemplateEngine().createTemplate(template).make(parms).toString()
    nodes.each { node ->
        triggerBuild(message, node)
    }
}

sendNotification('appY', [app: 'myapp', ver: 'v123'], ['n1', 'n2'])
The code above will output:
SUCCESSFUL, application myapp:v123 started properly >>>> n1
SUCCESSFUL, application myapp:v123 started properly >>>> n2
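
For completeness, here is roughly how the original four-parameter success method could map onto the generalized call. The parameter meanings (app name, version, two notification targets) and the deployment values below are assumptions for illustration, not taken from the question:

// assuming p1 = app name, p2 = version, p3/p4 = notification targets (an assumption)
sendNotification('appY', [app: 'myapp', ver: 'v123'], ['target1', 'target2'])

// the failure and deployment variants only differ in the template code and the parameters passed
sendNotification('depN', [app: 'myapp', ver: 'v123', node: 'node1', user: 'jenkins'], ['node1'])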

Related

ServiceStack with MiniProfiler for .Net 6

I was attempting to add Profiling into ServiceStack 6 with .Net 6 and using the .Net Framework MiniProfiler Plugin code as a starting point.
I noticed that ServiceStack still has Profiler.Current.Step("Step Name") in the Handlers, AutoQueryFeature and others.
What is currently causing me some stress is the following:
In ServiceStackHandlerBase.GetResponseAsync(IRequest httpReq, object request) the async Task is not awaited. This causes the step to be disposed of when it reaches the first async method it must await, so all the subsequent nested steps are not recorded as children. Is there something simple I'm missing here, or is this just a bug in a seldom used feature?
In SqlServerOrmLiteDialectProvider, most of the async methods make use of an Unwrap function that drills down to the underlying SqlConnection or SqlCommand. This causes an issue when attempting to wrap a command to enable profiling, because it ignores the override methods in the wrapper in favour of the IHasDbCommand.DbCommand nested within. Not implementing IHasDbCommand on the wrapping command makes it attempt to use the wrapping command, but it then hits a snag because of the forced cast to SqlCommand. Is there an easy way to combat this issue, or do I have to extend each OrmLiteDialectProvider I wish to use that has this issue so it takes the wrapping command into account when it is present?
Any input would be appreciated.
Thanks.
Extra Information Point 1
Below is the code from ServiceStackHandlerBase that appears (to me) to be a bug?
public virtual Task<object> GetResponseAsync(IRequest httpReq, object request)
{
    using (Profiler.Current.Step("Execute " + GetType().Name + " Service"))
    {
        return appHost.ServiceController.ExecuteAsync(request, httpReq);
    }
}
I made a small example that shows what I am looking at:
using System;
using System.Threading.Tasks;

public class Program
{
    public static async Task<int> Main(string[] args)
    {
        Console.WriteLine("App Start.");
        await GetResponseAsync();
        Console.WriteLine("App End.");
        return 0;
    }

    // Async method with a using and non-awaited task.
    private static Task GetResponseAsync()
    {
        using (new Test())
        {
            return AdditionAsync();
        }
    }

    // Placeholder async method.
    private static async Task AdditionAsync()
    {
        Console.WriteLine("Async Task Started.");
        await Task.Delay(2000);
        Console.WriteLine("Async Task Complete.");
    }
}

public class Test : IDisposable
{
    public Test()
    {
        Console.WriteLine("Disposable instance created.");
    }

    public void Dispose()
    {
        Console.WriteLine("Disposable instance disposed.");
    }
}
My Desired Result:
App Start.
Disposable instance created.
Async Task Started.
Async Task Complete.
Disposable instance disposed.
App End.
My Actual Result:
App Start.
Disposable instance created.
Async Task Started.
Disposable instance disposed.
Async Task Complete.
App End.
This to me shows that even though the task is awaited at a later point in the code, the using has already disposed of the contained object.
MiniProfiler was coupled to System.Web, so it isn't supported in ServiceStack .NET 6.
To view the generated SQL you can use a BeforeExecFilter to inspect the IDbCommand before it's executed.
This is what PrintSql() uses to write all generated SQL to the console:
OrmLiteUtils.PrintSql();
Note: when you return a non-awaited task it just means it doesn't get awaited at that point; it still gets executed when the returned task is eventually awaited.
To avoid the explicit casting you should be able to override a SQL Server Dialect Provider where you'll be able to replace the existing implementation with your own.

Batch up requests in Groovy?

I'm new to Groovy and am a bit lost on how to batch up requests so they can be submitted to a server as a batch, instead of individually, as I currently have:
class Handler {
    private String jobId
    // [...]
    void submit() {
        // [...]
        // client is a single instance of Client used by all Handlers
        jobId = client.add(args)
    }
}

class Client {
    //...
    String add(String args) {
        response = postJson(args)
        return parseIdFromJson(response)
    }
}
As it is now, something calls Client.add(), which POSTs to a REST API and returns a parsed result.
The issue I have is that the add() method is called maybe thousands of times in quick succession, and it would be much more efficient to collect all the args passed in to add(), wait until there's a moment when the add() calls stop coming in, and then POST to the REST API a single time for that batch, sending all the args in one go.
Is this possible? Potentially, add() can return a fake id immediately, as long as the batching occurs, the submit happens, and Client can later know the lookup between fake id and the ID coming from the REST API (which will return IDs in the order corresponding to the args sent to it).
As mentioned in the comments, this might be a good case for gpars, which is excellent at these kinds of scenarios.
This really is less about Groovy and more about asynchronous programming in Java and on the JVM in general.
If you want to stick with the standard Java concurrency idioms, I threw together a code snippet you could use as a potential starting point. It has not been tested and edge cases have not been considered. I wrote this up for fun, and since this is asynchronous programming and I haven't spent the appropriate time thinking about it, I suspect there are holes in there big enough to drive a tank through.
That being said, here is some code which makes an attempt at batching up the requests:
import java.util.concurrent.*
import java.util.concurrent.locks.*

// test code
def client = new Client()
client.start()

def futureResponses = []
1000.times {
    futureResponses << client.add(it as String)
}

client.stop()

futureResponses.each { futureResponse ->
    // resolve future...will wait if the batch has not completed yet
    def response = futureResponse.get()
    println "received response with index ${response.responseIndex}"
}
// end of test code

class FutureResponse extends CompletableFuture<String> {
    String args
}

class Client {
    int minMillisLullToSubmitBatch = 100
    int maxBatchSizeBeforeSubmit = 100
    int millisBetweenChecks = 10

    long lastAddTime = Long.MAX_VALUE
    def batch = []
    def lock = new ReentrantLock()
    boolean running = true

    def start() {
        running = true
        Thread.start {
            while (running) {
                checkForSubmission()
                sleep millisBetweenChecks
            }
        }
    }

    def stop() {
        running = false
        checkForSubmission()
    }

    def withLock(Closure c) {
        try {
            lock.lock()
            c.call()
        } finally {
            lock.unlock()
        }
    }

    FutureResponse add(String args) {
        def future = new FutureResponse(args: args)
        withLock {
            batch << future
            lastAddTime = System.currentTimeMillis()
        }
        future
    }

    def checkForSubmission() {
        withLock {
            if (System.currentTimeMillis() - lastAddTime > minMillisLullToSubmitBatch ||
                batch.size() > maxBatchSizeBeforeSubmit) {
                submitBatch()
            }
        }
    }

    def submitBatch() {
        // here you would need to put the combined args on a format
        // suitable for the endpoint you are calling. In this
        // example we are just creating a list containing the args
        def combinedArgs = batch.collect { it.args }

        // further there needs to be a way to map one specific set of
        // args in the combined args to a specific response. If the
        // endpoint responds with the same order as the args we submitted
        // were in, then that can be used otherwise something else like
        // an id in the response etc would need to be figured out. Here
        // we just assume responses are returned in the order args were submitted
        List<String> combinedResponses = postJson(combinedArgs)

        combinedResponses.indexed().each { index, response ->
            // here the FutureResponse gets a value, can be retrieved with
            // futureResponse.get()
            batch[index].complete(response)
        }

        // clear the batch
        batch = []
    }

    // bogus method to fake post
    def postJson(combinedArgs) {
        println "posting json with batch size: ${combinedArgs.size()}"
        combinedArgs.collect { [responseIndex: it] }
    }
}
A few notes:
something needs to be able to react to the fact that there were no calls to add for a while. This implies a separate monitoring thread, which is what the start and stop methods manage.
if we have an infinite sequence of adds without pauses, you might run out of resources. Therefore the code has a max batch size at which it will submit the batch even if there is no lull in the calls to add.
the code uses a lock to make sure (or try to; as mentioned above, I have not considered all potential issues here) we stay thread safe during batch submissions etc.
assuming the general idea here is sound, you are left with implementing the logic in submitBatch, where the main problem is dealing with mapping specific args to specific responses.
CompletableFuture is a Java 8 class. This can be solved using other constructs in earlier releases, but I happened to be on Java 8.
I more or less wrote this without executing or testing it; I'm sure there are some mistakes in there.
as can be seen in the printout below, the "maxBatchSizeBeforeSubmit" setting is more a recommendation than an actual max. Since the monitoring thread sleeps for some time and then wakes up to check how we are doing, the threads calling the add method might have accumulated any number of requests in the batch. All we are guaranteed is that every millisBetweenChecks we will wake up, check how we are doing, and if the criteria for submitting a batch have been reached, the batch will be submitted.
If you are unfamiliar with Java Futures and locks, I would recommend you read up on them; a minimal CompletableFuture sketch is included at the end of this answer.
If you save the above code in a groovy script code.groovy and run it:
~> groovy code.groovy
posting json with batch size: 153
posting json with batch size: 234
posting json with batch size: 243
posting json with batch size: 370
received response with index 0
received response with index 1
received response with index 2
...
received response with index 998
received response with index 999
~>
it should work and print out the "responses" received from our fake json submissions.
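
Since the batching above relies on completing futures after the fact, here is a minimal, self-contained sketch (separate from the batching code) of the CompletableFuture complete()/get() handshake; the value and the delay are made up for illustration:

import java.util.concurrent.CompletableFuture

def future = new CompletableFuture<String>()

// a producer completes the future later, which is what batch[index].complete(response) does above
Thread.start {
    sleep 200                       // simulate the batch being submitted some time later
    future.complete('response-42')  // hand the waiting consumer its value
}

// the consumer blocks on get() until the future has been completed
println future.get()                // prints: response-42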

Calling WCF endpoint from Azure Function

I have a v2 Azure Function written in .NET Core. I want to call into a legacy WCF endpoint like this:
basicHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

var factory = new ChannelFactory<IMyWcfService>(basicHttpBinding, null);
factory.Credentials.UserName.UserName = "foo";
factory.Credentials.UserName.Password = "bar";

IMyWcfService myWcfService = factory.CreateChannel(new EndpointAddress(new Uri(endpointUri)));
ICommunicationObject communicationObject = myWcfService as ICommunicationObject;
if (communicationObject != null)
    communicationObject.Open();

try
{
    return await myWcfService.GetValueAsync();
}
finally
{
    if (communicationObject != null)
        communicationObject.Close();
    factory.Close();
}
This works when I call it from a unit test, so I know the code is fine running on Windows 10. However, when I debug in the func host, I get missing dependency exceptions. I copied two DLLs, System.Private.Runtime and System.Runtime.WindowsRuntime, into the bin directory, but have reached the end of the line with the exception
Could not load type 'System.Threading.Tasks.AsyncCausalityTracer' from assembly 'mscorlib'
This seems like a runtime rabbit hole that I don't want to go down. Has anyone successfully called a WCF service from an Azure Function? Which dependencies should I add to get the call to work?

Worker stuck in a Sandbox?

Trying to figure out why I can log in with my REST API just fine on the main thread but not in a worker. All communication channels are operating fine and I am able to load it up no problem. However, when it tries to send some data it just hangs.
[Embed(source="../bin/BGThread.swf", mimeType="application/octet-stream")]
private static var BackgroundWorker_ByteClass:Class;

public static function get BackgroundWorker():ByteArray
{
    return new BackgroundWorker_ByteClass();
}
On a test script:
public function Main()
{
    fBCore.init("secrets", "my-firebase-id");
    trace("Init");
    //fBCore.auth.addEventListener(FBAuthEvent.LOGIN_SUCCES, hanldeFBSuccess);
    fBCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, hanldeFBSuccess);
    fBCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleIOError);
    fBCore.auth.email_login("admin#admin.admin", "password");
}

private function handleIOError(e:IOErrorEvent):void
{
    trace("IO error");
    trace(e.text); //Nothing here
}

private function hanldeFBSuccess(e:AuthEvent):void
{
    trace("Main login success.");
    trace(e.message); //Complete success.
}
When triggered by a class via an internal worker channel passed from Main on init:
Primordial:
private function handleLoginClick(e:MouseEvent):void
{
    login_mc.buttonMode = false;
    login_mc.play();
    login_mc.removeEventListener(MouseEvent.CLICK, handleLoginClick);
    log("Logging in as " + email_mc.text_txt.text);
    commandChannel.send([BGThreadCommand.LOGIN, email_mc.text_txt.text, password_mc.text_txt.text]);
}
Worker:
...
case BGThreadCommand.LOGIN:
    log("Logging in with " + message[1] + "::" + message[2]); //Log goes to a progress channel and comes back to the main thread, which reads the output successfully.
    fbCore.auth.email_login(message[1], message[2]);
    fbCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, loginSuccess); //Nothing
    fbCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleLoginIOError); //Fires
    break;
Auth Rest Class: https://github.com/sfxworks/FirebaseREST/blob/master/src/net/sfxworks/firebaseREST/Auth.as
Is this a worker limitation or a security sandbox issue? I have a deep feeling it is the latter of the two. If that's the case, how would I load the worker in a way that also gives it the proper permissions to act?
I completely ignored the giveAppPrivelages property in the createWorker function. Sorry, Stack Overflow. Sometimes I ask bad questions when I get little (or, in this case, no) sleep the night before.

Terminate existing pool when all work is done

Alright, brand new to gpars so please forgive me if this has an obvious answer.
Here is my scenario. We currently have a piece of our code wrapped in a Thread.start {} block. It does this so it can send messages to a message queue in the background and not block the user request. An issue we have recently run into with this is that, for large blocks of work, it is possible for the user to perform another action which would cause this block to execute again. As it is threaded, it is possible for the second batch of messages to be sent before the first, causing corrupted data.
I would like to change this process to work as a queue flow with gpars. I've seen examples of creating pools such as
def pool = GParsPool.createPool()
or
def pool = new ForkJoinPool()
and then using the pool as
GParsPool.withExistingPool(pool) {
    ...
}
This seems like it would handle the case where the user performs an action again: I could reuse the created pool and the actions would not be performed out of order, provided I have a pool size of one.
My question is: is this the best way to do this with gpars? And furthermore, how do I know when the pool has finished all of its work? Does it terminate when all the work is finished? If so, is there a method that can be used to check whether the pool has finished/terminated, so I know I need a new one?
Any help would be appreciated.
No, explicitly created pools do not terminate by themselves. You have to call shutdown() on them explicitly.
Using the withPool() {} construct, however, guarantees that the pool is destroyed once the code block has finished.
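
For illustration, here is a minimal sketch of the withPool() form; the pool size of one, the collection, and the doubling work are placeholders, not taken from the question:

import groovyx.gpars.GParsPool

// withPool creates the pool, runs the closure, and destroys the pool afterwards,
// so no explicit shutdown() call is needed
GParsPool.withPool(1) {          // a pool size of one means only one task runs at a time
    def results = [1, 2, 3].collectParallel { it * 2 }
    assert results == [2, 4, 6]
}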
Here is the current solution we have for our issue. It should be noted that we followed this route due to our requirements:
Work is grouped by some context
Work within a given context is ordered
Work within a given context is synchronous
Additional work for a context should execute after the preceding work
Work should not block the user request
Contexts are asynchronous between each other
Once work for a context is finished, the context should clean up after itself
Given the above, we've implemented the following:
// import needed for the gpars Actors.actor call below; log is assumed to be provided by the surrounding application
import groovyx.gpars.actor.Actors

class AsyncService {
    def queueContexts

    def AsyncService() {
        queueContexts = new QueueContexts()
    }

    def queue(contextString, closure) {
        queueContexts.retrieveContextWithWork(contextString, true).send(closure)
    }

    class QueueContexts {
        def contextMap = [:]

        def synchronized retrieveContextWithWork(contextString, incrementWork) {
            def context = contextMap[contextString]

            if (context) {
                if (!context.hasWork(incrementWork)) {
                    contextMap.remove(contextString)
                    context.terminate()
                }
            } else {
                def queueContexts = this
                contextMap[contextString] = new QueueContext({ ->
                    queueContexts.retrieveContextWithWork(contextString, false)
                })
            }

            contextMap[contextString]
        }

        class QueueContext {
            def workCount
            def actor

            def QueueContext(finishClosure) {
                workCount = 1
                actor = Actors.actor {
                    loop {
                        react { closure ->
                            try {
                                closure()
                            } catch (Throwable th) {
                                log.error("Uncaught exception in async queue context", th)
                            }
                            finishClosure()
                        }
                    }
                }
            }

            def send(closure) {
                actor.send(closure)
            }

            def terminate() {
                actor.terminate()
            }

            def hasWork(incrementWork) {
                workCount += (incrementWork ? 1 : -1)
                workCount > 0
            }
        }
    }
}
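
Usage then looks roughly like the following; the context strings and the work closures are made-up examples, not part of the solution above:

def asyncService = new AsyncService()

// work queued under the same context string runs one closure at a time, in submission order
asyncService.queue('order-123') {
    // send the first batch of messages for this context
}
asyncService.queue('order-123') {
    // runs only after the previous closure for 'order-123' has finished
}

// a different context gets its own actor and proceeds independently
asyncService.queue('order-456') {
    // unrelated work
}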
