I'm working on terraform-provider-ovh where I need to reload configuration only after all the other resources performed their changes.
I can't issue a refresh after every resource makes its own changes, as doing so creates an async job and would result in conflicts; besides, reloading the configuration is a very long operation. I can't find any way to trigger a "handler", "async operation", or some kind of provider-scoped post-processing.
My current idea is to create a dedicated resource for "refresh" which can call the API to see if there were any changes made and trigger a refresh if there are any waiting changes. The problem is that it also needs to be triggered after all the others are done, and I would really like to avoid requiring the user to explicitly define depends_on pointing to all resources that might trigger the refresh.
Any ideas for how to solve this reasonably would be very welcome.
So, if you have been working with Terraform for some time, you have probably faced this error message more than once:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value:
Usually this happens because something was screwed up during a destroy operation, and now there is a mismatch between the state and the lock.
The known solution for this is to delete the lock from DynamoDB and run terraform init again. And if that doesn't resolve it, also delete the tfstate from S3, which at this point doesn't have any data, as the infrastructure was destroyed.
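For reference, this cleanup can also be done from code. Here is a minimal sketch with the AWS SDK for JavaScript (v2); the bucket, table, and key names are hypothetical placeholders:

// Hypothetical names throughout; substitute your own bucket, table, and state key.
const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB.DocumentClient();
const s3 = new AWS.S3();

async function clearStaleState() {
  // The S3 backend keys its DynamoDB items by LockID; the digest item
  // uses the state path with an "-md5" suffix.
  await dynamodb.delete({
    TableName: 'terraform-locks',
    Key: { LockID: 'my-bucket/env/project.tfstate-md5' },
  }).promise();

  // Remove the orphaned state object itself.
  await s3.deleteObject({
    Bucket: 'my-bucket',
    Key: 'env/project.tfstate',
  }).promise();
}

clearStaleState().then(() => console.log('ready to run terraform init'));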
Surprisingly, neither is working now, and I don't have a clue why. There is no tfstate in the bucket; in fact, I even deleted every old version stored (the bucket has versioning enabled). There is no lock in DynamoDB either.
Changing the tfstate name works without issues, but I can't change it as it would break the naming convention I'm using for all my tfstates.
So, any ideas what's going on here? On Friday, the infrastructure was deployed and destroyed without issues (as part of the destroy, I always check that there is no lock left behind and delete the tfstate from S3). But today I'm facing this error; it's been a while already and I can't figure it out.
Holy ****!
So, it turns out DynamoDB was being a serious troll here.
I searched for the key with the md5 of this tfstate, and it wasn't returned. But then I noticed a message saying that there were more items to be returned... After clicking that button about 6 times, it eventually returned a hidden lock for this tfstate.
Deleted it, and everything is back to normal again.
So, in summary, if you ever face this issue and you can't find the lock in DynamoDB... make sure that all items are returned by the query, as it can take many attempts to return them all.
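If you prefer to check from code rather than clicking through the console, here is a minimal sketch with the AWS SDK for JavaScript (v2) that follows the scan pagination so no hidden item is missed; the table name is a hypothetical placeholder:

const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB.DocumentClient();

// Scan the whole lock table, following LastEvaluatedKey so that
// paginated ("hidden") items are not missed.
async function scanAllItems(tableName) {
  const items = [];
  let lastKey;
  do {
    const page = await dynamodb
      .scan({ TableName: tableName, ExclusiveStartKey: lastKey })
      .promise();
    items.push(...page.Items);
    lastKey = page.LastEvaluatedKey; // undefined once the scan is complete
  } while (lastKey);
  return items;
}

// 'terraform-locks' is a hypothetical table name; substitute your own.
scanAllItems('terraform-locks').then((items) => {
  // Look for any lock or digest entry referencing the tfstate in question.
  items.forEach((item) => console.log(item.LockID));
});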
I'm seeing some interesting behavior on Azure App Service that I'm hoping somebody will be kind enough to comment on.
Reproduction steps (all Azure steps can be done in the portal):
Create a new Web App in App Service (Standard pricing level, single instance is fine), e.g. mysite
Create a new staging slot for that App, e.g. mysite-staging
Deploy a bare-bones ASP.NET app to mysite with a file /scripts/test.js that has the content //ONE
Deploy a bare-bones ASP.NET app to mysite-staging with a file /scripts/test.js that has the content //TWO
Swap the deployment slots
Immediately after the swap starts, navigate to mysite.azurewebsites.net/scripts/test.js and monitor the returned content during the swap operation (by continually doing a force-refresh in the browser, or with a small polling script like the sketch below)
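For reference, a minimal polling sketch (Node 18+, using the hypothetical site name from the steps above) that logs every time the returned content changes during the swap:

const url = 'https://mysite.azurewebsites.net/scripts/test.js';

let previous = null;

setInterval(async () => {
  // Cache-busting query string so we always hit the origin, not a cache.
  const res = await fetch(`${url}?nocache=${Date.now()}`);
  const body = (await res.text()).trim();
  if (body !== previous) {
    console.log(`${new Date().toISOString()} content is now: ${body}`);
    previous = body;
  }
}, 250); // poll every 250 ms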
What I would expect to see:
At some point during the swap, the content changes seamlessly/consistently/irreversibly from //ONE to //TWO
What I actually see:
During the swap operation, the content "flickers"/"bounces" between //ONE and //TWO. After the swap operation is complete, the behavior is stable and //TWO is consistently returned
The observed behavior suggests that there is no single point in time at which all traffic can be said to be going to the new version.
The reason this concerns me is the following scenario:
A user requests a page mysite.azurewebsites.net which, during this "bouncing" stage, responds with the "v2" version of the page with a link to a CDN-hosted script mycdn.com/scripts/test.js?v2 (the ?v2 is a new query string)
The browser requests the script from the CDN, which in turn requests the script from mysite.azurewebsites.net. This time, the "bouncing" causes the response to be the v1 version of the script.
Now we have a v1 version of the script cached in the CDN, which all users in that region will load with the v2 version of the page
My question: Is this "bouncing" behavior during a swap operation "by design"? If so, what is the recommended approach for solving the pathological case above?
The behavior you've described is currently by design. When we perform the swap, we update the mappings between hostnames and sites in our database, but our frontend instances cache those mappings and refresh them every 30 seconds. So the "bouncing" period may last up to 30 seconds.
I do not have a good recommendation at the moment on how to solve this case, but I will look into possible ways to address it.
I'm using Iron Router (with RouteControllers) and I'd like to know whether Meteor keeps a cache for "publishes" when the page (URL) changes.
Example:
I want to use Meteor for a cooking site, so I have a section with a BIG list of recipes, and I can filter this list (by theme, preparation time, etc.). So, potentially, there will be a lot of different lists.
(This is one use case, but my question is also valid for a classic scenario: a user visits a recipe detail page and goes away... does Meteor clean the cache for this subscription on the server (which published the recipe data)?)
If I use subscriptions, does Meteor keep the cache when I change the filter information? And if not, how can I do that without keeping a cache in the local user database (and on the server) for each request the user can make?
Sorry, I'm a beginner with Meteor and it's a little confusing for me. When I read the documentation about Meteor and publish/subscribe, I get the impression that my app's memory usage will increase excessively...
There are multiple scenarios to take into consideration:
The user closes the page and re-opens it, or refreshes.
In that case, no subscription whatsoever is natively kept.
The user changes page with a router (no reload or page closing), and templates are destroyed.
If the subscription is done inside the router controls, it's generally cancelled (not kept) on page change. I think this is valid for both iron:router and meteorhacks:flow-router.
If the subscription is done inside the template controls, it is cancelled on destruction.
Otherwise, if it is done outside these pre-defined controls, the subscription is not cancelled.
You will need to adapt to these behaviours. If you want, for example, to remember the subscriptions across router pages, you will need to store them externally and control them in your own way.
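For example, a minimal sketch (with a hypothetical 'recipes' publication) that keeps the subscription handle outside the router/template lifecycle and stops it explicitly:

// Handles stored outside router/template code survive page changes;
// you decide when to stop them.
const subscriptions = {};

function subscribeToRecipes(filters) {
  // Stop the previous recipes subscription before starting a new one,
  // so stale documents are removed from minimongo.
  if (subscriptions.recipes) {
    subscriptions.recipes.stop();
  }
  subscriptions.recipes = Meteor.subscribe('recipes', filters);
}

// e.g. called whenever the user changes a filter:
subscribeToRecipes({ theme: 'dessert', maxPrepTime: 30 });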
AFAIK the cache is client-side, in minimongo. The publication on the server isn't actually used until you subscribe to it on the client, i.e.:
Meteor.publish('allRecipes', function () {
  return Recipes.find();
});
Doesn't do anything by itself. A client subscription needs to refer to it.
If your collection of recipes is very large and you don't want a lot of network overhead moving it all to the client, then you can implement server-side search in your subscription, for example with https://atmospherejs.com/meteorhacks/search-source
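As a rough illustration of that idea, a minimal server-side publication that only sends matching recipes to the client; the filter fields (theme, prepTime) are hypothetical:

// Publish only the recipes matching the client's filters, capped with a
// limit, so the full collection never reaches minimongo.
Meteor.publish('recipes', function (filters) {
  filters = filters || {};
  const query = {};
  if (filters.theme) {
    query.theme = filters.theme;
  }
  if (filters.maxPrepTime) {
    query.prepTime = { $lte: filters.maxPrepTime };
  }
  return Recipes.find(query, { limit: 50 });
});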
We have a Web API service running in a Windows ASP.NET MVC solution. There is a load method that takes about 40 minutes to complete and return its status to the calling page. During that time, the browser window is tied up. What design options do we have if we want the web page to come back with "submitted" and the process to continue running to completion? I don't care if the page never shows "complete"; we can pull that from another status page.
I've done something similar in the past, even though in my case the delay was shorter: 40-50 seconds of loading fresh data from multiple backend servers in a VPN. It was also in ASP.NET back then, but I believe the approach is still feasible, and you can get some ideas if I share my experience. I remember an old thread that I had favourited in the past and used the insight from it. You can check it out.
Here are some tips, in short, because I don't remember the details anymore (excuse my google-assisted memory!):
You should start the task in a new thread and not wait for it in your main thread.
You should also make sure that the task is started only once and cannot be initiated an infinite number of times by the user via refresh or via the UI. So, you had better persist the state in the database, so that on refresh a new thread is created only if the database says the task has not been executed recently and is not in progress.
Your page will load and show its contents, and you can display a .gif representing a progress bar, a loading wheel, or something similar to the user.
The task you started will continue on the server. When it completes, you can push an update to the UI via AJAX from the code-behind to make the experience even smoother if you like.
On subsequent requests, you can just retrieve the state of your task from the database in order to display something like "update completed at hh:mm:ss" (see the sketch after this list).
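On the browser side, the polling part could look roughly like this; the endpoints and the response shape are hypothetical:

// Kick off the long-running job, then poll a status endpoint so the
// page stays responsive while the server works.
async function startJobAndPoll() {
  // Returns immediately with "submitted"; the server runs the task on a
  // background thread and persists its progress in the database.
  await fetch('/api/load/start', { method: 'POST' });

  const timer = setInterval(async () => {
    const status = await (await fetch('/api/load/status')).json();
    document.getElementById('status').textContent = status.done
      ? `update completed at ${status.finishedAt}`
      : 'loading...';
    if (status.done) {
      clearInterval(timer); // stop polling once the task reports done
    }
  }, 5000); // poll every 5 seconds
}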
Hope this helps you and I wish you the best of luck!
What I mean is a kind of event or callback which is called when some cached value is expiring. Supposedly, this callback should be given the currently cached value, for example, to store it somewhere else apart from the cache.
To find such a mechanism, I reviewed the Notifications option, but it looks applicable only to explicit actions on the cache, like adding or removing, whereas expiration is a kind of thing that occurs implicitly. I found out that none of these callbacks is called even many minutes after the cached value has expired and become null, while one is called normally within the polling interval if I call DataCache.Remove explicitly (wrong, see update below).
I find this behavior strange, as ASP.NET has such a callback. You can even find an explanation of how to utilize it here on SO.
Also, I tried DataCache events. It is written in MSDN that, literally:
This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
Nevertheless, I created a handler for these events to see if I could test their args, like CacheOperationStartedEventArgs.OperationType == CacheOperationType.ClearCache, but it seemed to be in vain.
At the moment, I have started to think about workarounds for the lack of the required callback. Suggestions on how to implement them are welcome too.
UPDATE. After more attentive and patient testing, I found out that a notification with DataCacheOperations.ReplaceItem is sent after expiration. Regrettably, I did not find a way to get the value that was cached before the expiration occurred.