In AppHost.Configure I set a global JSON config JsConfig.TreatEnumAsInteger = false; and have a simple handler with two GET endpoints
public object Get(GetDayOfWeekAsText request)
{
    return new GetDayOfWeekResponse();
}

public object Get(GetDayOfWeekAsInt request)
{
    return new HttpResult(new GetDayOfWeekResponse())
    {
        ResultScope = () => JsConfig.With(new Config
        {
            TreatEnumAsInteger = true
        })
    };
}
Depending on which request I call first, all subsequent requests will serialize enums as either text or integers until the application is recycled. Explicitly setting TreatEnumAsInteger in the GetDayOfWeekAsText handler has no effect.
Thanks!
This should now be resolved with this commit.
This change is available from v5.4.1 that's now available on MyGet.
I am currently building an API that will be hosted via Azure Functions, running .NET Core 3.1. The way I have the project routed right now, I define the accepted methods as a parameter on the HttpTrigger and then use an if statement to determine how the endpoint was called. I am attempting to use the OpenAPI package to create API definitions, but when I assign multiple Methods to the function, the Swagger document only picks up the first Method listed (PUT). I am unsure of the intended structure/usage for endpoints that accept multiple request methods.
See code below. (OpenAPI tags are placeholder descriptions)
namespace Dyn.Sync.Func.PractifiSync
{
    public class Prospect
    {
        [FunctionName("Prospect")]
        [OpenApiOperation(operationId: "Run", tags: new[] { "name" })]
        [OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In = OpenApiSecurityLocationType.Query)]
        [OpenApiParameter(name: "name", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **Name** parameter")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "text/plain", bodyType: typeof(string), Description = "The OK response")]
        public async Task<IActionResult> Create([HttpTrigger(AuthorizationLevel.Anonymous, "post", "put", Route = null)] HttpRequest req, ILogger log)
        {
            string primarySecretsContainerName = "Main";
            DynUser user = await DynAuthManager.CreateDynUserAsync(req);
            DynProspect prospect = JsonSerializer.Deserialize<DynProspect>(req.Body);
            PFIConnection pfiConnector = PFIConnectionsCache.GetConnection(user, DynSecretsCache.GetSecretsContainer(primarySecretsContainerName));

            try
            {
                if (!pfiConnector.IsConnected) { await pfiConnector.Connect(); }
                if (req.Method == "POST") { return await pfiConnector.CreateProspect(prospect); }
                if (req.Method == "PUT") { return await pfiConnector.UpdateProspect(prospect); }
                else { return new ObjectResult("Invalid method.") { StatusCode = 400 }; }
            }
            catch (Exception ex)
            {
                DynError dynError = new DynError(ex);
                log.LogError(ex, "Exception " + dynError.RequestID.ToString() + " occurred.");
                return (IActionResult)new ExceptionResult(ex, true);
            }
        }
    }
}
My question is this: when the Swagger document is created, it only lists whichever method I defined first (in other words, it ignores the "put" method). What is the intended way to structure an API when creating it in Azure Functions? I tried creating a separate method in the same class for each HTTP method it would accept, but then I couldn't even hit the endpoint when making requests. Does Microsoft want us to create a new function class for each endpoint? So then instead of:
PUT http://myapi.azure.com/api/prospect
POST http://myapi.azure.com/api/prospect
it would be:
PUT http://myapi.azure.com/api/updateprospect
POST http://myapi.azure.com/api/prospect
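To illustrate, that "one function class per endpoint" layout would look roughly like this (a sketch only; the class names, routes, and trimmed-down bodies are illustrative, not my actual code):

public class CreateProspect
{
    [FunctionName("CreateProspect")]
    public Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "prospect")] HttpRequest req, ILogger log)
    {
        // ...same auth/connection logic as above, ending with pfiConnector.CreateProspect(prospect)
        return Task.FromResult<IActionResult>(new OkResult());
    }
}

public class UpdateProspect
{
    [FunctionName("UpdateProspect")]
    public Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "updateprospect")] HttpRequest req, ILogger log)
    {
        // ...same logic, ending with pfiConnector.UpdateProspect(prospect)
        return Task.FromResult<IActionResult>(new OkResult());
    }
}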
I should note that this will eventually live under an Azure API Management instance, which makes me even more worried about implementing it in a "one function per method" fashion: when loading Azure Functions the way I have done it, APIM correctly assigns the methods, and I'd prefer not to have to configure that manually.
I have been searching for documentation on this specific issue with no luck. Anyone have any ideas how Microsoft intended this to be used?
Thanks.
I have an Azure Functions app on .NET Core 2.1 with various endpoints.
For authentication I use an external security provider, from which I need to retrieve the discovery document (metadata) before being able to validate a request's token.
The document is currently getting loaded every time a single request is made:
var discoveryDocument = await configurationManager.GetConfigurationAsync();
Naturally I don't want to make this call for every single request, since the document rarely changes.
On the other hand, Azure Functions are stateless. I've heard that ConfigurationManager.GetConfigurationAsync() caches the retrieved data somehow, though I couldn't find more information on this. For now I've ended up with a function that is triggered once on startup, retrieves the document, and stores it in the local file system. When a request comes in, I read that file again just to avoid another call for the public keys.
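For reference, the read side of that workaround looks roughly like this (a sketch; the cache path and helper class are illustrative, not my exact code):

public static class CachedDiscovery
{
    // Path written by the startup-triggered function that downloads the metadata.
    private static readonly string CachePath =
        Path.Combine(Path.GetTempPath(), "openid-configuration.json");

    public static async Task<OpenIdConnectConfiguration> LoadAsync()
    {
        // Read the locally cached document instead of calling the metadata endpoint again.
        var json = await File.ReadAllTextAsync(CachePath);
        return OpenIdConnectConfiguration.Create(json);
    }
}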
Any experience on this?
Solution:
I could solve it with a static class as #juunas suggested.
For every single function I re-use the ValidateToken() method. The call for the discovery document is only made every once in a while (when the IConfigurationManager decides it needs to be refreshed), because the result is cached automatically.
using System.IdentityModel.Tokens.Jwt;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;
using Microsoft.IdentityModel.Tokens;

class AuthConfig
{
    static readonly IConfigurationManager<OpenIdConnectConfiguration> _configurationManager;

    static AuthConfig()
    {
        _configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            "<CONFIGURL>",
            new OpenIdConnectConfigurationRetriever(),
            new HttpDocumentRetriever());
    }

    // MyUserEntity and ParseMyUserEntity are application-specific and defined elsewhere.
    public static async Task<MyUserEntity> ValidateToken(string token)
    {
        // The configuration manager caches the discovery document and refreshes it on its own schedule.
        var discoveryDocument = await _configurationManager.GetConfigurationAsync(CancellationToken.None);

        var validationParameters = new TokenValidationParameters
        {
            RequireExpirationTime = true,
            RequireSignedTokens = true,
            ValidateIssuerSigningKey = true,
            IssuerSigningKeys = discoveryDocument.SigningKeys,
            ValidateLifetime = true,
            ValidateAudience = false,
            ValidateIssuer = true,
            ValidIssuer = discoveryDocument.Issuer
        };

        try
        {
            var claimsPrincipal = new JwtSecurityTokenHandler()
                .ValidateToken(token, validationParameters, out var rawValidatedToken);
            var user = ParseMyUserEntity(claimsPrincipal);
            return user;
        }
        catch
        {
            return null;
        }
    }
}
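Each HTTP-triggered function then just calls the shared helper; roughly like this (an illustrative caller, not part of the solution itself):

[FunctionName("GetSomething")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req, ILogger log)
{
    // Extract the bearer token and validate it via the cached configuration manager.
    var token = req.Headers["Authorization"].ToString().Replace("Bearer ", string.Empty);

    var user = await AuthConfig.ValidateToken(token);
    if (user == null)
        return new UnauthorizedResult(); // token missing, invalid, or expired

    return new OkObjectResult(user);
}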
You can store the data in a static field (which can be in a separate static class too).
That should work as an in-memory cache on that function instance at least.
If the function gets scaled to multiple instances, their caches will be separated.
I'm developing a JWT-based multi-tenancy system using ServiceStack. The JWT token contains shard information, and I use JwtAuthProvider to translate the JWT token into a session object, following the instructions at http://docs.servicestack.net/jwt-authprovider.
Now I want to use ServiceStack MQ for asynchronous processing. The MQ request needs to be aware of the shard information, so I populate the request context before executing it as follows:
mqServer.RegisterHandler<EmployeeAssignedToProject>(m =>
{
    var req = new BasicRequest { Verb = HttpMethods.Post };
    var sessionKey = SessionFeature.GetSessionKey(m.GetBody().SessionId);
    var session = HostContext.TryResolve<ICacheClient>().Get<Context>(sessionKey);
    req.Items[Keywords.Session] = session;
    var response = ExecuteMessage(m, req);
    return response;
});
Here, Context is my custom session class. This technique stems from the instructions at http://docs.servicestack.net/messaging#authenticated-requests-via-mq. Since I execute the message within the context of req, I reckoned I should then be able to resolve Context as follows:
container.AddScoped<Context>(c =>
{
    var webRequest = HostContext.TryGetCurrentRequest();
    if (webRequest != null)
    {
        return webRequest.SessionAs<Context>();
    }
    else
    {
        return HostContext.RequestContext.Items[Keywords.Session] as Context;
    }
});
However, HostContext.RequestContext.Items is always empty. So the question is, how to populate HostContext.RequestContext.Items from within message handler registration code?
I've tried to dig a little into the ServiceStack code and found that ExecuteMessage(IMessage dto, IRequest req) in ServiceController doesn't seem to populate data in RequestContext. In my case it is a bit too late to get the session inside the service instance, as the service depends on some DB connections whose shard info is kept in the session.
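For context, the shard-dependent registration looks roughly like this (a sketch; OpenConnectionForShard and ShardId are illustrative names, not my actual code):

container.AddScoped<IDbConnection>(c =>
{
    var session = c.Resolve<Context>();             // my custom session class
    return OpenConnectionForShard(session.ShardId); // hypothetical helper that picks the right shard
});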
The same Request Context instance can't be resolved from the IOC. The Request Context instance is created in the MQ's RegisterHandler<T>() where you can add custom data in the IRequest.Items property, e.g:
mqServer.RegisterHandler<EmployeeAssignedToProject>(m =>
{
    var req = new BasicRequest { Verb = HttpMethods.Post };
    req.Items[MyKey] = MyValue; //Inject custom per-request data
    //...
    var response = ExecuteMessage(m, req);
    return response;
});
This IRequest instance is available throughout the Request pipeline and from base.Request in your Services. It's not available from your IOC registrations so you will need to pass it in as an argument when calling your dependency, e.g:
public class MyServices : Service
{
    public IDependency MyDep { get; set; }

    public object Any(MyRequest request) => MyDep.Method(base.Request, request.Id);
}
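The dependency itself can then read the injected session off the IRequest it was handed; a minimal sketch (IDependency, MyDependency and the Method signature are illustrative):

public class MyDependency : IDependency
{
    public object Method(IRequest req, int id)
    {
        // The custom session injected in RegisterHandler<T>() (or set by the web request pipeline)
        var session = req.Items[Keywords.Session] as Context;
        // ...use the session's shard info to route the work for 'id'
        return session;
    }
}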
I'm trying to set an option on JsConfig for a single async method on JsonServiceClient by using JsConfigScope, but it does not seem to work. What am I doing wrong? Is there another way to do this?
var client = new JsonServiceClient(baseUrl);

using (var scope = JsConfig.BeginScope())
{
    scope.EmitCamelCaseNames = true;
    return client.PostAsync<SomeResponse>(url, request);
}
You can't use a using scope with an async request since the scope will be disposed before the async service has completed. You would need to await the response, i.e:
using (JsConfig.With(new Config { TextCase = TextCase.CamelCase }))
{
    return await client.PostAsync<SomeResponse>(url, request);
}
You can use the lower-level HTTP Utils to split the request serialization and response deserialization outside of the async call, which is the approach used in the StripeGateway.
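A rough sketch of that split, assuming the ServiceStack.Text HTTP Utils extensions (ToJson, FromJson, PostJsonToUrlAsync) are available; this is not the StripeGateway code itself:

string json;
using (JsConfig.With(new Config { TextCase = TextCase.CamelCase }))
{
    json = request.ToJson();               // serialize inside the scoped config
}

var responseJson = await url.PostJsonToUrlAsync(json); // plain HTTP call, no scoped config needed

using (JsConfig.With(new Config { TextCase = TextCase.CamelCase }))
{
    return responseJson.FromJson<SomeResponse>(); // deserialize inside the scoped config
}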
Otherwise you could potentially use the Response filter to dispose of the scoped configuration, e.g:
var scope = JsConfig.With(new Config { TextCase = TextCase.CamelCase });

var client = new JsonServiceClient(BaseUrl) {
    ResponseFilter = httpRes => scope.Dispose()
};

return client.PostAsync<SomeResponse>(url, request);
I have a client app which monitors changes in real time by establishing a long-lived HTTP connection to the server.
In ASP.NET Web API, the server can use PushStreamContent to keep the connection open for a long time and send a response whenever there is an update.
But in ServiceStack there seems to be no similar facility.
I looked at the sample code in Different ways of returning an ImageStream.
The IStreamWriter.WriteTo method is only called once, and I can't use async IO operations to avoid blocking a server thread.
Is there a way to send a progressive response to the client asynchronously?
Here is sample code in Web API which does the job:
public static async Task Monitor(Stream stream, HttpContent httpContent, TransportContext transportContext)
{
    ConcurrentQueue<SessionChangeEvent> queue = new ConcurrentQueue<SessionChangeEvent>();
    TaskCompletionSource<object> tcs = new TaskCompletionSource<object>();

    Action<SessionChangeEvent> callback = (evt) =>
    {
        queue.Enqueue(evt);
        tcs.TrySetResult(null);
    };
    OnSessionChanged += callback;

    try
    {
        using (StreamWriter sw = new StreamWriter(stream, new UTF8Encoding(false)))
        {
            await sw.WriteLineAsync(string.Empty);
            await sw.FlushAsync();
            await stream.FlushAsync();

            for (; ; )
            {
                Task task = tcs.Task;
                await Task.WhenAny(task, Task.Delay(15000));
                if (task.Status == TaskStatus.RanToCompletion)
                {
                    tcs = new TaskCompletionSource<object>();
                    SessionChangeEvent e;
                    while (queue.TryDequeue(out e))
                    {
                        string json = JsonConvert.SerializeObject(e);
                        await sw.WriteLineAsync(json);
                    }
                    task.Dispose();
                }
                else
                {
                    // write an empty line to keep the connection alive
                    await sw.WriteLineAsync(string.Empty);
                }
                await sw.FlushAsync();
                await stream.FlushAsync();
            }
        }
    }
    catch (CommunicationException ce)
    {
    }
    finally
    {
        OnSessionChanged -= callback;
    }
}
Writing to a long-running connection is exactly what Server Events does. You can look at the implementation of ServerEventsHandler or ServerEventsHeartbeatHandler to see how it's implemented in ServiceStack.
Basically it just uses a custom ASP.NET IHttpAsyncHandler which can be registered at the start of ServiceStack's Request Pipeline with:
appHost.RawHttpHandlers.Add(req => req.PathInfo.EndsWith("/my-stream")
    ? new MyStreamHttpHandler()
    : null);
Where MyStreamHttpHandler is a custom HttpAsyncTaskHandler, e.g:
public class MyStreamHttpHandler : HttpAsyncTaskHandler
{
    public override bool RunAsAsync() { return true; }

    public override Task ProcessRequestAsync(
        IRequest req, IResponse res, string operationName)
    {
        //Write any custom request filters and registered headers
        if (HostContext.ApplyCustomHandlerRequestFilters(req, res))
            return EmptyTask;
        res.ApplyGlobalResponseHeaders();

        //Write to the response output stream here, either via:
        res.OutputStream.Write(...);

        //or if you need access to the underlying ASP.NET Response:
        var aspRes = (HttpResponseBase)res.OriginalResponse;
        aspRes.OutputStream...

        //After you've finished, end the request with
        res.EndHttpHandlerRequest(skipHeaders: true);
        return EmptyTask;
    }
}
The ApplyCustomHandlerRequestFilters() and ApplyGlobalResponseHeaders() calls at the start give other plugins a chance to validate/terminate the request or add any HTTP headers (e.g. CorsFeature).
Have a look at ServerEvents. If I understood you right, this is what you are looking for.
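If Server Events fits, the built-in feature covers this; a minimal sketch (the channel name, selector, and SessionChangeEvent DTO are illustrative):

// In AppHost.Configure: enable the built-in Server Events feature.
Plugins.Add(new ServerEventsFeature());

// Wherever a change is detected, notify subscribed clients:
public class SessionChangeNotifier
{
    public void Publish(SessionChangeEvent evt)
    {
        var serverEvents = HostContext.TryResolve<IServerEvents>();
        serverEvents.NotifyChannel("sessions", "cmd.SessionChanged", evt);
    }
}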