I have created an Azure App Configuration store and added several Feature Gates to it.
Following the documentation, I added a different label to each one to represent a different environment.
The following snippet in Program.cs works when I do not define labels on the features:
config.AddAzureAppConfiguration(options =>
{
    options.Connect(connectionString)
        .Select(KeyFilter.Any)
        .ConfigureRefresh(refresh =>
            refresh.Register(KeyFilter.Any, true)
                .SetCacheExpiration(TimeSpan.FromSeconds(10)));
    options.UseFeatureFlags();
});
By "works" I mean:
The features load, and changing a feature in the Azure App Configuration store causes the change to hot-reload within 10-15 seconds.
For each Feature I have the following labels: "Development", "Staging", "Production".
I can make the correct keys load on startup by using the following code:
config.AddAzureAppConfiguration(options =>
{
    options.Connect(connectionString)
        .Select(KeyFilter.Any, "Staging") // For example
        .ConfigureRefresh(refresh =>
            refresh.Register(KeyFilter.Any, true)
                .SetCacheExpiration(TimeSpan.FromSeconds(10)));
    options.UseFeatureFlags();
});
This will load the correct keys, but updating the values in the Azure portal doesn't trigger a refresh.
I have tried several different techniques, but the refresh never updates if I have specified a label.
Is this a limitation, or is my terminology wrong?
I have also tried several other things, such as:
refresh.Register(KeyFilter.Any, "Staging", true)
refresh.Register("SpecificKey", "Staging", true)
But the refresh doesn't seem to work.
Any help would be great
Thank you
Added a bit more investigation.
Here is an example of my App Configuration store; I am only using Feature Gates.
When I have any label set, the refresh stops working, although the initial load works.
Here is my current config code:
config.AddAzureAppConfiguration(options =>
{
    options.Connect(connectionString);
    options.Select(KeyFilter.Any, "Staging");
    options.UseFeatureFlags(featureFlagOptions =>
    {
        featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(30);
    });
});
If I remove the label "Staging" from the feature gates and from the code, then my app refreshes every 30 seconds.
I have also reproduced this behaviour in a console app, which I will upload.
Has anyone found a way to get FeatureManagement flags to auto-refresh if you have a label on them?
Did you miss the following in the documentation?
When no parameter is passed to the UseFeatureFlags method, it loads all feature flags with no label in your App Configuration store. The default refresh expiration of feature flags is 30 seconds. You can customize this behavior via the FeatureFlagOptions parameter. For example, the following code snippet loads only feature flags that start with TestApp: in their key name and have the label dev. The code also changes the refresh expiration time to 5 minutes. Note that this refresh expiration time is separate from that for regular key-values.
options.UseFeatureFlags(featureFlagOptions =>
{
    featureFlagOptions.Select("TestApp:*", "dev");
    featureFlagOptions.CacheExpirationInterval = TimeSpan.FromMinutes(5);
});
I'm trying to create dynamic pages based on a database that grows by the minute. Therefore it isn't an option to use createPage and build several times a day.
I'm using onCreatePage here to create pages, which works fine for my first route, but when I try to make an English route it somehow doesn't work.
gatsby-node.js:
exports.onCreatePage = async ({ page, actions: { createPage } }) => {
  if (page.path.match(/^\/listing/)) {
    page.matchPath = '/listing/:id'
    createPage(page)
  }

  if (page.path.match(/^\/en\/listing/)) {
    page.matchPath = '/en/listing/:id'
    createPage(page)
  }
}
What I'm trying to achieve here is getting 2 dynamic routes like:
localhost:8000/listing/123 (this one works)
localhost:8000/en/listing/123 (this one doesn't work)
My pages folder looks like this:
pages
---listing.tsx
---en/
------listing.tsx
Can anyone see what I'm doing wrong here?
--
P.S. I want to use SSR (available since Gatsby v4) by using the getServerData() in the templates for these pages. Will that work together with pages created dynamically with onCreatePage or is there a better approach?
Based on what we've discussed in the comment section: the fact that the /en/ path is never created, and hence never enters the following condition:
if (page.path.match(/^\/en\/listing/)) {
  page.matchPath = '/en/listing/:id'
  createPage(page)
}
points me to think that the issue is in your createPages API rather than in onCreatePage, which means that your English page is not even being created.
Keep in mind that the onCreatePage API is a callback invoked when a page is created, so it's triggered after createPages.
If you add a console.log(page.path), you shouldn't see the English page in your IDE/terminal output, so try debugging how you are creating the /en/ route, because onCreatePage itself doesn't seem to have any problem.
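As a quick sanity check, a sketch of that logging (the same gatsby-node.js as above, with the log added at the top of the callback):

exports.onCreatePage = async ({ page, actions: { createPage } }) => {
  // Log every page path the API sees; if '/en/listing' never shows up here,
  // the English page is not being created in the first place.
  console.log(page.path)

  if (page.path.match(/^\/listing/)) {
    page.matchPath = '/listing/:id'
    createPage(page)
  }

  if (page.path.match(/^\/en\/listing/)) {
    page.matchPath = '/en/listing/:id'
    createPage(page)
  }
}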
Goal
A request comes in and is handled by the Azure Functions runtime. By default it creates a Request entry and a bunch of Trace entries in Application Insights. I want to add a custom dimension to that top-level request item (on a per-request basis) so I can use it for filtering/analysis later.
(Screenshots: a query for requests in Application Insights, and the resulting list of requests including a custom dimensions column.)
The Azure Functions runtime adds a few custom dimensions already. I want to add a few of my own.
Approach
The most promising approach I've found is shown below (taken from https://github.com/microsoft/ApplicationInsights-node.js/issues/392):
appInsights.defaultClient.addTelemetryProcessor((envelope, context) => {
  var data = envelope.data.baseData;
  data.properties['mykey'] = 'myvalue';
  return true;
});
However, I find that this processor is only called for requests that I initiate within my function. For example, if I make an HTTP request to another service, then details of that request will be passed through the processor and I can add custom properties to it. But the main function's own request does not seem to pass through here, so I can't add my custom property.
I also tried this:
defaultClient.commonProperties['anotherCustomProp'] = 'bespokeProp2'
Same problem: the custom property doesn't arrive in Application Insights. I've played with many variations on this, and it appears that the logging done by azure-functions is walled off from anything I can do within my code.
The best workaround I have right now is to call trackRequest manually. This is okay, except I end up with each request logged twice in Application Insights: once by the framework and once by me. And both need to have the same operation_id, otherwise I can't find the associated trace/error items, so I'm having to extract the operationId in a slightly hacky way. This may be fine; my knowledge of Application Insights is pretty naive at this point.
import { setup, defaultClient } from 'applicationinsights' // I have to import the specific functions, because "import ai from 'applicationinsights'" returns null

// Call this because otherwise defaultClient is null.
// Some examples call start(); I've tried with and without it.
// I think start() must be useful when you're adding application-insights to a project fresh,
// whereas the azure-functions runtime must be doing this already.
setup()

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  // Extract the operation id from the traceparent as per the W3C standard https://www.w3.org/TR/trace-context/.
  const operationId = context.traceContext.traceparent.split('-')[1]
  var operationIdOverride = { 'ai.operation.id': operationId }

  // Create my own trackRequest entry
  defaultClient.trackRequest({
    name: 'my func name',
    url: context.req.url.split('?')[0],
    duration: 123,
    resultCode: 200,
    success: true,
    tagOverrides: operationIdOverride,
    properties: {
      customProp: 'bespokeProp'
    }
  })
}
The Dream
Our C# cousins seem to have an array of options, like Activity.Current.Tags and the ability to add a TelemetryInitializer. It looks like what I'm trying to do should be supported; I'm just not finding the right combination of commands! Is there something similar for javascript/typescript/nodejs, where I can just add a tag on a per-request basis? Along the lines of context.traceContext.attributes['myprop'] = 'myValue'.
Alternative
Alternatively, instrumenting my code using my own TelemetryClient (rather than the defaultClient) with trackRequest, trackTrace, trackException etc. is not a very big job and should work well - that would be more explicit. Should I just do that? Is there a way to disable the azure-functions tracking, or perhaps I just leave that running side-by-side?
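For what it's worth, a rough sketch of that alternative, assuming the applicationinsights TelemetryClient picks up the instrumentation key from the environment (the Functions runtime normally sets it) and that names like customProp are placeholders:

import { AzureFunction, Context, HttpRequest } from '@azure/functions'
import { TelemetryClient } from 'applicationinsights'

// My own client, separate from whatever the Functions runtime tracks.
const client = new TelemetryClient()

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  // Reuse the operation id from the incoming traceparent so the telemetry correlates.
  const operationId = context.traceContext.traceparent.split('-')[1]
  const tagOverrides = { 'ai.operation.id': operationId }

  // Explicit, per-request telemetry carrying my custom dimensions.
  client.trackTrace({
    message: 'handling request',
    tagOverrides,
    properties: { customProp: 'bespokeProp' }
  })

  context.res = { status: 200, body: 'ok' }
}

export default httpTrigger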
BACKGROUND:
I have an existing site which makes use of the following technologies:
ASP.NET MVC 5
KnockoutJS
Kendo UI 2014.1.318
Web API 2
OData v3
There are many Kendo Grids on my site, all working perfectly fine. Until now, that is... when I started integrating Durandal.
PROBLEM:
95% of the grids are perfectly fine, but there are 2 of them which are getting data from an OData v3 Action (POST action). For example:
[EnableQuery(AllowedQueryOptions = AllowedQueryOptions.All)]
[HttpPost]
public IQueryable<ServiceInfoResult> GetServices(ODataActionParameters parameters)
{
}
Yes, it's unusual, but for reasons I won't go into, I have data coming from an OData (POST) action. The grids usually work fine; I just have to make sure to set the following:
schema: {
    data: function (data) {
        return data.value;
    },
    total: function (data) {
        //return data["odata.count"]; // this is the one normally used for other grids, but not here...
        // instead, I need to use the following and do paging locally, which is fine, since there's a VERY small number of records, so there's no issue.
        return data.value.length;
    },
    //etc
}
Anyway, now that I am using Durandal/RequireJS, something weird is happening: on first load everything looks perfectly fine, but when I click on a page (2, 3, 4, etc.), the grid shows ALL of the records, even though the footer of the grid still says "showing 11-20 of 238 items" and has the page numbers.
Again, I say it was working fine before. Does anyone have any idea as to why this might be happening and what I can do about it?
UPDATE
I just discovered something. With all my grids, I am using a property on the viewModel to specify the gridPageSize. Basically, I am doing this:
var ViewModel = function () {
    var self = this;
    self.gridPageSize = 10;
    //etc

    self.attached = function () {
        //etc
        self.gridPageSize = $("#GridPageSize").val(); // this is a hidden field I am using to get the page size which was set in the admin area
        //etc
    }
    //etc
and in the grid configuration, I have:
pageSize: self.gridPageSize,
serverPaging: false,
serverFiltering: false,
serverSorting: false,
//etc
This works perfectly fine with all the grids that use server-side paging. However, this grid is using client-side paging. What I did now was simply the following:
pageSize: 10,
and now it works as expected. This is a good workaround, but not perfect, as I cannot dynamically set the page size. Any ideas as to what's happening here?
You can dynamically change the pageSize of your grid. All you need to do is call the pageSize() method of the grid's dataSource and it should work as expected.
$('#grid').data('kendoGrid').dataSource.pageSize(100);
This is no longer an issue, because OData 5.7 now returns @odata.count for actions/functions returning collections of complex types. Previously, I turned off server-side paging and filtering and just got all the data on the client, which I didn't like, but I had no choice... now I can use server-side paging and don't need to care about this weird problem anymore. More info on the OData fix here: https://github.com/OData/WebApi/issues/484#issuecomment-153929767
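For anyone hitting the same thing, a rough sketch of what the dataSource settings can look like once the service returns the count (the exact key depends on the OData payload format: "odata.count" for v3, "@odata.count" for v4):

schema: {
    data: function (data) {
        return data.value;
    },
    total: function (data) {
        // Use the server-provided count so paging and filtering can stay on the server.
        return data["@odata.count"];
    }
},
serverPaging: true,
serverFiltering: true,
serverSorting: true,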
How do I expire the administrator session after a period of inactivity in SilverStripe 3.1.x? Is there a config option for this?
I searched and found the following code snippet which, when placed in the Page_Controller class, works for frontend users but is totally ineffective in the administration area.
public function init() {
    parent::init();
    self::logoutInactiveUser();
}

public static function logoutInactiveUser() {
    $inactivityLimit = 1; // in Minutes - deliberately set to 1 minute for testing purposes
    $inactivityLimit = $inactivityLimit * 60; // Converted to seconds
    $sessionStart = Session::get('session_start_time');
    if (isset($sessionStart)) {
        $elapsed_time = time() - Session::get('session_start_time');
        if ($elapsed_time >= $inactivityLimit) {
            $member = Member::currentUser();
            if ($member) $member->logOut();
            Session::clear_all();
            $this->redirect(Director::baseURL() . 'Security/login');
        }
    }
    Session::set('session_start_time', time());
}
After over 1 minute of inactivity, the admin user is still logged in and the session has not timed out.
For people like myself still searching for a solution to this, there's a much simpler alternative. As it turns out, the only good solution at the moment is indeed to disable LeftAndMain.session_keepalive_ping and simon_w's solution will not work precisely because of this ping. Also, disabling this ping should not cause data loss (at least not for SilverStripe 3.3+) because the user will be presented with an overlay when they attempt to submit their work. After validating their credentials, their data will be submitted to the server as usual.
Also, for anyone who (like myself) was looking for a solution on how to override the CMS ping via LeftAndMain.session_keepalive_ping using _config.yml keep reading.
Simple Fix: In your mysite/_config.php, simply add:
// Disable back-end AJAX calls to /Security/ping
Config::inst()->update('LeftAndMain', 'session_keepalive_ping', false);
This will prevent the CMS from refreshing the session, which will naturally expire on its own behind the scenes (and will not be renewed on the next request). That way, the setting you may already have in _config.yml dictating the session timeout will actually be respected, allowing you to log out a user who's been inactive in the CMS. Again, data should not be lost, for the reasons mentioned in the first paragraph.
You can optionally manually override the session timeout value in mysite/_config/config.yml to help ensure it actually expires at some explicit time (e.g. 30min below):
# Set session timeout to 30min.
Session:
  timeout: 1800
You may ask: Why is this necessary?
Because, while the bug (or functionality?) preventing you from overriding the LeftAndMain.session_keepalive_ping setting to false was supposedly fixed in framework PR #3272, it was actually reverted soon thereafter in PR #3275.
I hope this helps anyone else confused by this situation like I was!
This works, but I would love to hear from the core devs as to whether or not this is best practice.
In mysite/code I created a file called MyLeftAndMainExtension.php with the following code:
<?php
class MyLeftAndMainExtension extends Extension {
    public function onAfterInit() {
        self::logoutInactiveUser();
    }

    public static function logoutInactiveUser() {
        $inactivityLimit = 1; // in Minutes - deliberately set to 1 minute for testing
        $inactivityLimit = $inactivityLimit * 60; // Converted to seconds
        $sessionStart = Session::get('session_start_time');
        if (isset($sessionStart)) {
            $elapsed_time = time() - Session::get('session_start_time');
            if ($elapsed_time >= $inactivityLimit) {
                $member = Member::currentUser();
                if ($member) $member->logOut();
                Session::clear_all();
                Controller::curr()->redirect(Director::baseURL() . 'Security/login');
            }
        }
        Session::set('session_start_time', time());
    }
}
Then I added the following line to mysite/_config.php
LeftAndMain::add_extension('MyLeftAndMainExtension');
That seemed to do the trick. If you prefer to do it through yml, you can add this to mysite/_config/config.yml:
LeftAndMain:
  extensions:
    - MyLeftAndMainExtension
The Session.timeout config option is available for setting an inactivity timeout for sessions. However, setting it to anything greater than 5 minutes isn't going to work in the CMS out of the box.
Having a timeout in the CMS isn't productive, and your content managers will end up ruing the timeout. This is because it is possible (and fairly common) to be active in the CMS, while appearing inactive from the server's perspective (say, you're writing a lengthy article). As such, the CMS is designed to send a ping back to the server every 5 minutes to ensure users are logged in. While you can stop this behaviour by setting the LeftAndMain.session_keepalive_ping config option to false, I strongly recommended against doing so.
Ok, so I am using Node.js and Azure Blob Storage to handle some file uploads.
When a person uploads an image I would like to show them a thumbnail of the image. The upload works great and I have it stored in my blob.
I used this fine link (Generating Azure Shared Access Signatures with BlobService.getBlobURL() in Azure SDK for Node.js) to help me create this code, which generates a shared-access temporary URL.
process.env['AZURE_STORAGE_ACCOUNT'] = "[MY_ACCOUNT_NAME]";
process.env['AZURE_STORAGE_ACCESS_KEY'] = "[MY_ACCESS_KEY]";

var azure = require('azure');
var blobs = azure.createBlobService();

var tempUrl = blobs.getBlobUrl('[CONTAINER_NAME]', "[BLOB_NAME]", { AccessPolicy: {
    Start: Date.now(),
    Expiry: azure.date.minutesFromNow(60),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
This creates a url just fine.
Something like this : https://[ACCOUNT_NAME].blob.core.windows.net:443/[CONTAINER_NAME]/[BLOB_NAME]?st=2013-12-13T17%3A33%3A40Z&se=2013-12-13T18%3A33%3A40Z&sr=b&sp=r&sig=Tplf5%2Bg%2FsDQpRafrtVZ7j0X31wPgZShlwjq2TX22mYM%3D
The problem is that when I take the temp URL and plug it into my browser, it will only download the image rather than view it (in this case it is a simple jpg file).
This carries over to my code: I can't seem to view it in an <img> tag...
The link is right and downloads the right file...
Is there something I need to do to view the image rather than download it?
Thanks,
David
UPDATE
Ok, so I found this article:
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/b8759195-f490-420b-a587-2bb614366ad2/embedding-images-from-blob-storage-in-ssrs-report-does-not-work
Basically, it told me that I wasn't setting the content type when uploading the file, so the browser didn't know what to do with it.
I used code from here: http://www.snip2code.com/Snippet/8974/NodeJS-Photo-Upload-with-Azure-Storage/
This allowed me to upload it correctly and it now views properly in the browser.
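For reference, a minimal sketch of that kind of fix, assuming the 2013-era azure Node SDK where createBlockBlobFromFile accepts a contentType option (the exact option name differs between SDK versions, so treat it as an assumption):

var azure = require('azure');
var blobs = azure.createBlobService();

// Upload the image with an explicit content type so the browser renders it
// inline instead of downloading it. Newer azure-storage SDKs use
// { contentSettings: { contentType: ... } } instead.
blobs.createBlockBlobFromFile('[CONTAINER_NAME]', '[BLOB_NAME]', 'photo.jpg',
    { contentType: 'image/jpeg' },
    function (error) {
        if (error) {
            console.error(error);
        } else {
            console.log('Upload complete');
        }
    });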
The issue I am having now is that when I put the tempUrl into an img tag I get this error:
Failed to load resource: the server responded with a status of 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
This is the exact same link that works just fine if I paste it into my browser... why can't I show it in an image tag?
UPDATE 2
Ok, so as a stupid test I put in a 7-second delay between when my page loads and when the img tag gets its source from the temp URL. This seems to fix the problem (most of the time), but it is, obviously, a crappy solution even when it works...
At least this verifies that, because it works sometimes, my markup is at least correct.
I can't, for the life of me, figure out why a delay would make a bit of difference...
Thoughts?
UPDATE 3
Ok, based on a comment below, I have tried to set my start time to about 20 minutes in the past.
var start = moment().add(-20, 'm').format('ddd MMM DD YYYY HH:mm:ss');

var tempUrl = blobs.getBlobUrl(Container, Filename, { AccessPolicy: {
    Start: start,
    Expiry: azure.date.minutesFromNow(60),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
I made my start variable the same format as the azure.date.minutesFromNow. It looks like this: Fri Dec 13 2013 14:53:58
When I do this I am not able to see the image even from the browser, much less the img tag. Not sure if I am doing something wrong there...
UPDATE 4 - ANSWERED
Thanks to the glorious @MikeWo, I have the solution. The correct code is below:
var tempUrl = blobs.getBlobUrl('[CONTAINER_NAME]', "[BLOB_NAME]", { AccessPolicy: {
    Start: azure.date.minutesFromNow(-5),
    Expiry: azure.date.minutesFromNow(45),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
Mike was correct in that there seemed to be some sort of clock disconnect between the server and my localhost, so I needed to set the start time in the past. In update 3 I was doing that, but Mike noticed that Azure does not allow the start and end time to be more than 60 minutes apart... so in update 3 I was using a -20 minute start and a 60 minute expiry, which is an 80-minute window.
The new, successful way above makes the total window 50 minutes, and it works without any delay at all.
Thanks for taking the time Mike!
Short version: There is a bit of time drift that occurs in distributed systems, including in Azure. In the code that creates the SAS, instead of using a start time of Date.now(), set the start time to a minute or two in the past. Then you should be able to remove the delay.
Long version: The clock on the machine creating the signature (and calling Date.now()) might be a few seconds faster than the machines in BLOB storage. When the request to the URL is made immediately, the BLOB service hasn't reached the "start time" of the SAS yet and thus throws the 403. So, by setting the start time a few seconds in the past, or even the start of the current day if you want to cover a massive clock drift, you build in handling of the drift.
UPDATE: After some trial and error: make sure that an ad-hoc SAS isn't valid for longer than an hour. Setting the start time a few minutes in the past and the expiration 60 minutes in the future was too big a window; set the start a little in the past and the expiration not quite an hour after that.