Clear Azure Redis Cache - azure

Due to an issue, I need to clear all of the cache data on a Redis cache hosted on Azure, and I want to do it from the Azure portal rather than from my application. One option I can think of is to delete the Redis cache instance and recreate it, but is there a better way of doing that? I'm using StackExchange.Redis.dll.
Update 2:
Could you tell me how to get the public key in PEM format? The doc here says "The easiest way to run this command in Windows - MSYS2", and I don't have any idea about that.
Update 1:
Could you tell me why this is happening when I use redis-cli?

For Azure's Redis service, the Azure portal has a built-in console (currently in preview).
From there, it's as simple as executing the FLUSHALL command.
If you're running Redis in, say, a VM, you'll need to use a tool to connect remotely to the cache and run the FLUSHALL command yourself.
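If you would rather script the flush than click through the portal, here is a minimal sketch using ioredis (the Node client mentioned further down this page). The cache host name and the access-key environment variable are placeholders, and it assumes the Azure cache is reached over TLS on port 6380, which you would confirm for your own instance.

// flush-cache.ts - hedged sketch; host name and REDIS_ACCESS_KEY are placeholders.
import Redis from "ioredis";

async function flushAzureCache(): Promise<void> {
  const host = "mycache.redis.cache.windows.net";   // placeholder cache name
  const redis = new Redis({
    host,
    port: 6380,                                     // Azure Redis TLS port (assumed enabled)
    password: process.env.REDIS_ACCESS_KEY,         // access key copied from the portal
    tls: { servername: host },
  });
  try {
    await redis.flushall();                         // removes every key in every database
    console.log("Cache flushed");
  } finally {
    redis.disconnect();
  }
}

flushAzureCache().catch((err) => {
  console.error("Flush failed:", err);
  process.exit(1);
});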

You can use
https://marketplace.visualstudio.com/items?itemName=gbnz.redis-cache-clear
which is a basic Azure DevOps extension. Use '*' as the key value and it will flush all Redis data.

Related

Ways to overcome the 230s hard-coded limit to the SCM endpoint

Background
We have a PHP App Service and MySQL database that are deployed using an Azure DevOps pipeline (YAML). The content itself is a PHP site that is packaged up into a single file using Akeeba by an external supplier. The package is a Zip file (which can be deployed as a standard Zip deployment), and inside the Zip file is a huge JPA file. The JPA is essentially the whole web site plus database tables, settings, file renames and a ton of other stuff all rolled into one JPA file. Akeeba essentially unzips the files, copies them to the right places, does all the DB work and so on. To kick the process off, we can simply connect to a specific URL (web site + path) and run the PHP, which does all the clever unpackaging via a web GUI. But we want to include this stage in the pipeline instead so that the process is fully automated end to end. Akeeba has a CLI as an alternative to the web GUI deployment, so it should go like this:
Create web app
Deploy the web site ZIP (zipDeploy)
Use the REST API to access Kudu and run the relevant command (php install.php web.jpa) to unpack the jpa and do the MySQL stuff - this normally takes well over 30 minutes (it is a big site and it has a lot of "stuff" to do - but, it does actually work in the end).
The problem is that the SCM REST API has a hard-coded 230s limit as described here: https://blog.headforcloud.com/2016/11/15/azure-app-service-hard-timeout-limit/
So, the unpack stage keeps throwing "Invoke-RestMethod : 500 - The request timed out" exactly on the 230s mark.
We have tried SCM_COMMAND_IDLE_TIMEOUT and WEBJOBS_IDLE_TIMEOUT but, unsurprisingly, they did not make any difference.
$cmd = @{ "command" = "php .\site\wwwroot\install.php .\site\wwwroot\web.jpa .\site\wwwroot" }
Invoke-RestMethod -Uri $url -Headers @{ "Authorization" = "Basic $creds" } -Body (ConvertTo-Json $cmd) -Method Post -ContentType "application/json" -TimeoutSec 7200
I can think of a few hypothetical ways around it (some quite eccentric):
Find another way to run CLI commands inside the Web App after deployment other than the Kudu REST API. Is there such a thing? I Googled and checked SO but all I found were pointers to the way we do it (or try to do it) now.
Use something like Selenium to click the GUI buttons instead of using the CLI. (I do not know if they would suffer a timeout.)
Instead of running the command via the Kudu REST API, use the same API to create and deploy a script to the web server, start it, and then let the REST call return whilst the script continues to run on the Web App. Essentially, bodge an async call but without the callback, and then have the pipeline check in on the site at, say, 5-minute intervals (see the sketch at the end of this question). Clunky.
Extend the 230s limit - but I do not think that Microsoft make this possible.
Make the web site as fast as possible during the deployment in the hope of getting it under the 4-minute mark and then down-scale it. Yuk!
See what the Akeeba JPA unpacking actually does, unpack it pre-deployment and do what the unpackage process does but controlled via the Pipeline. This is potentially a lot of work and would lose the support of the supplier.
Give up on an automated deployment. That would rather defeat much of the purpose of a Devops pipeline.
Try AWS + Terraform instead. That's not an approved infrastructure environment, however.
Given that Microsoft understandably do not want long-running API calls hanging around, I understand why the limit exists. However, I would therefore expect there to be another mechanism for interacting with an App Service file system via a CLI. Does anyone know how?
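For what it's worth, a rough sketch of the "bodge an async call, then poll" idea might look like the following, run from a Node step in the pipeline. The app name, the publishing credentials, the marker-file name and the exact backgrounding syntax are all assumptions to verify against your own Kudu instance; this only illustrates the shape of the approach, not a proven fix for the 230s limit. It relies on Kudu's /api/command and /api/vfs endpoints and on Node 18+ for global fetch.

// poll-deploy.ts - fire off the installer, then poll for a marker file it writes when done.
const scm = "https://myapp.scm.azurewebsites.net";   // placeholder app name
const auth = "Basic " + Buffer.from(
  `${process.env.DEPLOY_USER}:${process.env.DEPLOY_PASS}`
).toString("base64");

async function kickOffInstall(): Promise<void> {
  // Ask Kudu to start the installer and return immediately. A wrapper script deployed
  // alongside the site is assumed to write install-done.txt when the unpack finishes.
  await fetch(`${scm}/api/command`, {
    method: "POST",
    headers: { Authorization: auth, "Content-Type": "application/json" },
    body: JSON.stringify({
      command: 'start /B cmd /c "php install.php web.jpa . > install.log 2>&1 & echo done > install-done.txt"',
      dir: "site\\wwwroot",
    }),
  });
}

async function waitForMarker(intervalMs = 5 * 60 * 1000, maxTries = 24): Promise<void> {
  for (let i = 0; i < maxTries; i++) {
    // Kudu's VFS API returns 404 until the marker file exists.
    const res = await fetch(`${scm}/api/vfs/site/wwwroot/install-done.txt`, {
      headers: { Authorization: auth },
    });
    if (res.ok) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Install did not finish within the polling window");
}

kickOffInstall().then(waitForMarker).then(() => console.log("Unpack complete"));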
The 4-minute idle timeout is at the TCP level and is implemented on the Azure hardware load balancer. This timeout is not configurable and cannot be changed. One thing I want to mention is that this is an idle timeout at the TCP level, which means it is only hit if the connection is idle and no data transfer is happening. To put it another way, it will be hit if the web application receives a request and keeps processing it for more than 4 minutes without sending any data back.
Resolution
Ideally, in a web application it is not good to keep the underlying HTTP request open, and 4 minutes is a decent amount of time. If you have a requirement for background processing within your web application, the recommended solution is to use Azure WebJobs and have the web app interact with the WebJob to be notified once the background processing is done (Azure provides many mechanisms for this, such as queue triggers, and you can choose the one that suits you best). Azure WebJobs are designed for background processing and you can do as much background processing as you want within them. I am sharing a few articles that talk about WebJobs in detail:
· http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx
· https://azure.microsoft.com/en-us/documentation/articles/websites-webjobs-resources/
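To make the WebJobs suggestion a little more concrete, here is a minimal sketch of a continuous WebJob written for Node using the @azure/storage-queue package: the web app drops a work item onto a storage queue and returns immediately, and the WebJob picks it up and does the long-running work. The queue name, environment variable and message shape are assumptions for illustration only; the articles above cover the details (and the C# WebJobs SDK equivalent).

// webjob-worker.ts - hypothetical continuous WebJob that processes queued background work.
import { QueueClient } from "@azure/storage-queue";

const queue = new QueueClient(
  process.env.AzureWebJobsStorage as string,   // storage connection string (assumed to be set)
  "unpack-jobs"                                // hypothetical queue name
);

async function runForever(): Promise<void> {
  await queue.createIfNotExists();
  for (;;) {
    const batch = await queue.receiveMessages({ numberOfMessages: 1, visibilityTimeout: 3600 });
    for (const msg of batch.receivedMessageItems) {
      const job = JSON.parse(msg.messageText);   // assumes the web app enqueued plain JSON
      // ... do the long-running work here (e.g. unpack the JPA), then acknowledge it:
      await queue.deleteMessage(msg.messageId, msg.popReceipt);
      console.log("Finished job", job);
    }
    if (batch.receivedMessageItems.length === 0) {
      await new Promise((resolve) => setTimeout(resolve, 30_000));  // idle back-off
    }
  }
}

runForever().catch((err) => { console.error(err); process.exit(1); });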
============================================================================
It totally depends on the app. A message queue comes to mind. There are a lot of potential solutions and it will be up to you to decide.
============================================================================
Option #1) You can change the code to send some sort of continue header to the client to keep the session open.
A sample is shown here
This shows the HTTP Headers with the Expect 100-continue header:
https://msdn.microsoft.com/en-us/library/aa287673%28v=vs.71%29.aspx?f=255&MSPPError=-2147217396
This shows how to add a Header to the collection:
https://msdn.microsoft.com/en-us/library/aa287502(v=vs.71).aspx
Option #2) Progress bar
This sample shows how to use a progress bar:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/d84f4c89-ebbf-44d3-bc4e-43525ae1df45/how-to-increase-progressbar-when-i-running-havey-query-in-sql-server-or-oracle-?forum=csharpgeneral
Option #3) A common practice to keep the connection active for a longer period is to use TCP keep-alive. Packets are sent when no activity is detected on the connection. By keeping ongoing network activity, the idle timeout value is never hit and the connection is maintained for a long period (a small sketch follows option #4 below).
Option #4) You can also try hosting your application in an IaaS VM instead of an App Service. This may avoid the ARR timeout issue because the architecture is different, and I believe the timeout is configurable there.
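To illustrate option #3 from a Node client, the sketch below enables TCP keep-alive on the socket used for the request so that probe packets keep the connection from looking idle. The 30-second probe interval and the host/path are arbitrary placeholders, and whether the Azure front end actually treats keep-alive probes as activity is something you would need to confirm.

// keepalive.ts - sketch of option #3: enable TCP keep-alive on an outgoing HTTPS request.
import * as https from "https";

// An agent that reuses sockets and enables keep-alive probes on them.
const agent = new https.Agent({ keepAlive: true, keepAliveMsecs: 30_000 });

const req = https.request(
  { host: "myapp.azurewebsites.net", path: "/long-running", method: "GET", agent },  // placeholders
  (res) => {
    res.on("data", (chunk) => process.stdout.write(chunk));
    res.on("end", () => console.log("\ndone"));
  }
);

// Belt and braces: also set keep-alive directly on the socket once it has been assigned.
req.on("socket", (socket) => socket.setKeepAlive(true, 30_000));
req.end();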

Persist data for 24 hours

I need to build a microservice that scrapes a message once a day and persists it somewhere. It does not need to be accessible after 24 hours (it can be deleted). It doesn't really matter where or how, but I need to access it from an Express.js endpoint and return the message. Currently we use Redis and MongoDB for data persistence. It feels wrong to create a whole collection for one tiny service, and I'm not sure of an application of Redis that would fulfill this task. What's my best option? Open to any suggestions, thank you!
You can use YugabyteDB and set a table-level TTL of 24 hours; the data will then be deleted automatically.
Redis provides an expiration mechanism out of the box. You can associate a timeout with a key, and it will be automatically deleted after the timeout has expired. Some official documentation here.
Redis also provides logical databases, if you want to keep these expiring keys separated from the rest of your application, so you do not need to spin up another machine. Some official documentation here.
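Since the stack already includes Redis and Node, a minimal sketch with ioredis might look like this. The key name, the logical database number and the 24-hour TTL are illustrative choices only; an Express handler can then simply await getDailyMessage() and return it (or a 404 once it has expired).

// daily-message.ts - store the scraped message with a 24h expiry in a separate logical DB.
import Redis from "ioredis";

// db: 1 keeps these keys apart from the rest of the application (which uses db 0 by default).
const redis = new Redis({ host: "localhost", port: 6379, db: 1 });

export async function saveDailyMessage(message: string): Promise<void> {
  // EX sets the time-to-live in seconds; Redis deletes the key automatically afterwards.
  await redis.set("daily:message", message, "EX", 60 * 60 * 24);
}

export async function getDailyMessage(): Promise<string | null> {
  return redis.get("daily:message");   // null once the key has expired
}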

How do I automate redis flushall when it runs out of memory?

My Redis DB runs out of memory after 2-3 days, so I flush it manually. But I want to flushall automatically from Node.js.
Read about Redis' maxmemory-policy and then choose one of the allkeys-* policies.
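To illustrate that suggestion: rather than flushing everything from Node on a schedule, you can let Redis evict keys on its own once it reaches a memory limit. Here is a sketch using ioredis; in practice you would normally set maxmemory and maxmemory-policy in redis.conf (or your provider's configuration) rather than from application code, and the 100mb cap is only an example.

// eviction.ts - let Redis evict keys itself instead of flushing everything from Node.
// Setting these at runtime is for illustration; redis.conf is the durable place for them.
import Redis from "ioredis";

const redis = new Redis();   // localhost:6379 by default

async function configureEviction(): Promise<void> {
  await redis.call("CONFIG", "SET", "maxmemory", "100mb");               // example memory cap
  await redis.call("CONFIG", "SET", "maxmemory-policy", "allkeys-lru");  // evict least recently used keys
  console.log(await redis.call("CONFIG", "GET", "maxmemory-policy"));
}

configureEviction().finally(() => redis.disconnect());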
You should look at why this happens in the first place; maybe you could expire keys using a TTL (check this link).
Another option would be to set up a cron job to flush Redis once in a while (not recommended).

Forcing a Redis snapshot / persistence via SAVE command?

I am using the ioredis library for Node.js - I am wondering how to send Redis a signal to force persistence. I am having a hard time finding out how to do this. The SAVE command seems to do this, but I can't verify that. Can anyone tell me for sure if the SAVE command will tell Redis to write everything in memory to disk on command?
This article hints at it: https://community.nodebb.org/topic/932/redis-useful-info
So does this one: http://redis.io/commands/save
The answer is yes, SAVE will do the job for you, but it behaves synchronously, meaning it will block until the save is done and not let other clients retrieve data. As shown in the docs:
You almost never want to call SAVE in production environments where it
will block all the other clients
The better solution is described under BGSAVE: you can call BGSAVE and then check LASTSAVE, which will return the timestamp of the latest snapshot taken by the instance. http://redis.io/commands/lastsave
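Since the question mentions ioredis, a small sketch of the BGSAVE + LASTSAVE approach might look like this; the one-second polling interval is an arbitrary choice.

// snapshot.ts - trigger a background snapshot and wait for it to complete, using ioredis.
import Redis from "ioredis";

const redis = new Redis();

export async function forceSnapshot(): Promise<void> {
  const before = await redis.lastsave();   // unix timestamp of the previous snapshot
  await redis.bgsave();                    // fork a child and write the RDB file in the background
  // Poll until LASTSAVE moves forward, i.e. the new snapshot has been written to disk.
  while ((await redis.lastsave()) <= before) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}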

Chef chef-validator.pem security

Hi, I am setting up a cluster of machines at offsite locations using Chef. If one of these machines was stolen, what damage could the attacker do to my chef server or other nodes by having possession of chef-validator.pem? What other things could they access through Chef? Thanks!
This was one of the items discussed at a recent Foodfight episode on managing "secrets" in chef. Highly recommended watching:
http://foodfightshow.org/2013/07/secret-chef.html
The knife bootstrap operation uploads this key when initializing new chef clients. Possession of this key enables the client to register itself against your chef server. That is actually its only function; once the client is up and running, the validation key is no longer needed.
But it can be abused... As @cbl has pointed out, if an unauthorized third party gets access to this key, they can create new clients that can see everything on your chef server that normal clients can see. It can also theoretically be used to mount a denial-of-service attack on your chef server by flooding it with registration requests.
The foodfight panel recommend a simple solution. Enable the chef-client cookbook on all nodes. It contains a "delete_validation" recipe that will remove the validation key and reduce your risk exposure.
The validator key is used to create new clients on the Chef Server.
Once the attacker gets hold of it, he can pretend he's a node in your infrastructure and have access to the same information any node has.
If you have sensitive information in an unencrypted data bag, for example, he'll have access to that.
Basically he'll be able to run any recipe from any cookbook, do searches (and have access to all your other nodes' attributes), read data bags, etc.
Keep that in mind when writing cookbooks and populating the other objects on the server. You could also monitor the chef server for any suspicious client creation activity, and if you have any reason to believe that the validator key has been stolen, revoke it and issue a new one.
It's probably a good idea to rotate the key periodically as well.
As of Chef 12.2.0 the validation key is no longer required:
https://blog.chef.io/2015/04/16/validatorless-bootstraps/
You can delete your validation key on your workstation and then knife will use your user credentials to create the node and client.
There are also some other nice features of this: whatever you supply for the run_list and environment is applied to the node when it is created, so there is no more relying on the first-boot.json file being read by chef-client and the run having to complete before node.save creates the node at the end of the bootstrapping process.
Basically, chef-client authenticates to the server using two keys:
1) the organization validator.pem, and
2) the user.pem
Without the correct combination of these two keys, chef-client won't be able to authenticate with the chef server.
They can even connect any node to the chef server with the stolen key via the following steps:
1) Copy the validator key into the /etc/chef folder on any machine
2) Create a client.rb file with the following details:
log_location STDOUT
chef_server_url "https://api.chef.io/organizations/ORGNAME"   # the target organization
validation_client_name 'ORGNAME-validator'                    # name of the validator client
validation_key '/etc/chef/validator.pem'                      # path to the stolen key
3) Run chef-client to connect to the chef server
