Azure Redis Cache + WordPress On Azure

I've followed the instructions in this post to configure this, but the Redis status always shows "not connected". I tried to turn on diagnostics on the Redis Cache instance, but it doesn't even seem like any requests are reaching the service itself. Any ideas?

After some more investigation it looks like there is a thread on the plugin's discussion page that covers the majority of the scenarios... I would read through this and ensure you have things in the proper place as well:
https://wordpress.org/support/topic/enabling-with-predis-and-remote-redis

HTTPS is not supported by the Predis library; you must use the non-HTTPS endpoint, which is port 6379.

You can use HTTPS for Redis Cache and WordPress. You need to set WP_REDIS_SCHEME to use TLS. See: https://cloud.accigo.se/blog/how-to-set-up-azure-redis-cache-for-wordpress/
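For reference, here is a minimal wp-config.php sketch for the TLS case. This assumes the Redis Object Cache plugin; the host name and access key below are placeholders, and the exact constant names can vary between plugin versions, so double-check against the plugin's documentation.

define( 'WP_REDIS_HOST',     'your-cache-name.redis.cache.windows.net' ); // placeholder host
define( 'WP_REDIS_PORT',     6380 );              // Azure's SSL/TLS port; 6379 is the non-SSL port
define( 'WP_REDIS_PASSWORD', 'your-access-key' ); // placeholder: primary or secondary access key
define( 'WP_REDIS_SCHEME',   'tls' );             // connect over TLS instead of the default 'tcp'

Put these above the "That's all, stop editing!" line in wp-config.php. If you go the non-TLS route from the earlier answer instead, use port 6379 and leave WP_REDIS_SCHEME at its default.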

Sometimes after editing the wp-config.php file, the Diagnostics page will show "Redis: not found"; you may need to wait a little and enable and disable the Redis object cache a few times before the plugin picks up the updated information from wp-config.php.
I have put together a blog post which lists the detailed steps to configure Redis Cache with WordPress on Azure.
I hope this helps.

Related

API Platform with alternative Runtime, Caddy, Vulcain, Cache ecosystem

Currently I'm investigating a setup backed by api-platform with the following goals:
the PHP backend MUST yield minimal resource payloads, thus I do not want to embed relations at all
the PHP backend SHOULD be able to run in alternative runtimes, e.g. Swoole
the webserver should push related resources via HTTP/2 Push, leveraging the built-in Vulcain support of the api-platform distribution
I cannot find that many resources about those setups - at least not in such a form that they answer subsequent questions sufficiently.
My starting setup was simply based on the api-platform distribution 2.6.8
So, until now I've learned the following things:
out of the box, the Caddy + HTTP/2 Push setup works with the PHP container being based on php:8.1-fpm-alpine, while Caddy is obviously using php_fastcgi directly
when I was fooling around with the currently available cache-handler I was able to get the HTTP cache working, but I was struggling to find any information about how cache invalidation works. The api-platform docs mostly focus on Varnish; there is also only a VarnishPurger shipped in the api-platform core. Writing a custom one should not be that hard if the Caddy cache-handler somehow allows BAN requests or something similar, but where can I find info about that? I see that the handler is based on Souin, but as unfamiliar as I am, I have no clue how (and if) Souin supports cache invalidation at all.
when changing the PHP container to be based on Swoole (in my current testing scenario), php_fastcgi cannot be used in Caddy; instead, I ended up using reverse_proxy (as described in the Vulcain docs), which basically works and serves proper HTTP responses but does not push any resources requested with Preload headers (as I said, it worked when the PHP backend was based on PHP-FPM). How can I debug what happens here? Caddy does not yield any info about the push handling, nor does the Vulcain Caddy module.
Long story short(er), to sum up my questions:
how can I figure out why Caddy + Vulcain is not working in a reverse_proxy setup?
is the current state of the Caddy cache handler functional / supported by the api-platform distribution?
how do I implement/support BAN requests (or other fine-grained cache invalidation) for the Caddy cache handler?
Souin supports invalidation using the PURGE HTTP method. I already wrote a PR to set up Souin in the api-platform/core project, but they are busy with the v3.0 release. Maybe in the near future they'll review and probably merge it, I don't know. But if you use a decorator on the Varnish purger and reuse the code I wrote in the PR, you'll be able to automatically purge the endpoints associated with the base route.
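To make that concrete, here is a minimal hand-written sketch of such a decorator; it is not the code from the PR. It assumes api-platform 2.6's ApiPlatform\Core\HttpCache\PurgerInterface and Symfony's HttpClient, and the class name, namespace and the http://caddy base URL are my own placeholders.

<?php
// src/HttpCache/SouinPurger.php (illustrative sketch, not the code from the PR)

namespace App\HttpCache;

use ApiPlatform\Core\HttpCache\PurgerInterface;
use Symfony\Contracts\HttpClient\HttpClientInterface;

final class SouinPurger implements PurgerInterface
{
    public function __construct(
        private HttpClientInterface $client,
        private string $cacheBaseUrl = 'http://caddy' // assumption: the Caddy/Souin container is reachable here
    ) {
    }

    public function purge(array $iris): void
    {
        // One PURGE request per IRI; Souin invalidates the cached entry for that URL.
        foreach ($iris as $iri) {
            $this->client->request('PURGE', $this->cacheBaseUrl.$iri);
        }
    }
}

You would then register this class as a decorator of the built-in Varnish purger service (api_platform.http_cache.purger.varnish, if I remember the service id correctly) so that api-platform's invalidation listener sends PURGE requests to Caddy instead of BAN requests to Varnish.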

OpenProject webhooks not fired

I'm trying to use OpenProject's webhook to send work package data to a third-party system, but it isn't even being fired.
I did all my tests using this OpenProject Docker container, from version 7.4.0 to 7.4.7, but none of them worked. In all of these images the webhook plugin is included and configured, so theoretically no additional setup should be needed (except registering the webhook in the web interface).
Passenger and the worker:jobs process are running. There is no clue in the log files. The webhook is enabled and set to trigger an HTTP POST call to a localhost address.
Has anyone run into a similar issue? I'm not a Ruby developer, and I wonder if some kind of daemon or service start is missing.
Our team figured out how to make the webhooks work. You only need to check the "Work package added" and "Work package updated" options in the Email notifications group in the instance's System settings.
This is an undocumented setting, and I see it as a feature issue.
Nevertheless, it's working.

How to set SSL versions in a script when there are multiple URLs in a concurrent group of requests in a single script?

There are two different URLs in a script that I have recorded, and each uses a different version of SSL. There is a concurrent group inside the script which has requests to both URLs. How do I set the SSL version for each without removing the concurrency part?
I have tried using WinInet mode for replay which solved the issue. But I need to measure the response time for each URL and I cannot achieve it using WinInet mode as it doesn't generate the Web Page Diagnostics graph.
I've also tried creating automatic transactions but I couldn't see any of them in the results summary.
If you have access to the servers involved, then enable the time-taken HTTP log field. If you are running IIS, which is a good bet with WinInet, then the default log model for IIS will give you what you need.
At the conclusion of your tests, pull the logs. Use Microsoft Log Parser (staying with the Microsoft theme) to pull the min, max and avg time-taken values, grouped by request and filtered on the IP addresses of your load generators.
It would be interesting to know which SSL versions you have.
The following function lets you set the SSL version before the URL call:
web_set_sockets_option("SSL_VERSION", "put your TLS version here");
Accepted values are TLS1, TLS1.1, TLS1.2 and more.
See the Help by hitting F1 on the function to get more information.

Deploying my front end and detecting client location by IP address - which AWS service should handle this? Confused by my options

I'm still new to AWS and just following the documentation and asking questions here when I get stuck. Please excuse me if this question sounds really noobish.
So far, I've deployed the following:
EB to deploy my REST API
RDS to deploy my psql database
Lambda functions to handle things like authentication & sending JWTs, uploading images to S3, etc.
I have got my basic back end deployed (no caching set up yet, as I just started learning about Redis; just the bare bones so far).
I'm still developing my front end, and have not even thought about how I will be deploying it yet (probably another deployment on EB, since I am using universal react). I am just developing it locally but using my production env variables now so I am hitting my deployed API, etc.
One of the MAJOR things I have no idea how to do is detecting the location of incoming client requests by IP address. This is so that I can return the INITIAL results in your general location, just like Yelp, Foursquare, etc. do when you go to their sites.
For now, I am just building a web app on desktop, so I just want to worry about getting the IP address to get the general area of the user. My use case is similar to other sites you might have used which provide an INITIAL result set for things in your area (think Foursquare or Yelp).
Here are my questions:
What would be a good way to do this? I'm thinking of handling this in my front-end universal React deployment, since it will be a Node server with rendered-page caching. Is this a terrible idea? It would work something like this:
(1) request from client comes in
(2) get the IP from the request and look up its location using some service (still not sure what I'm going to use; I've found a few, plus a Node.js library called node-geoip). Preferably I can get the zip code, since I am trying to avoid doing so many queries by unique location in my database; instead I would return results for that zip code, and the front end would show an initial map with the initial results in that zip code.
(3) return to the client the rendered page with those location params if it exists; otherwise create it, send it, and cache it.
Is the above a really dumb idea? Maybe you have already done something like this, and could share your wisdom :)
Is there an AWS service which can already handle something like this for me? Perhaps there's some functionality which can already do this.
Thanks.
AGAIN, I apologize if this is long-winded. I don't know anyone in real life who can help me and I feel alone :(. I appreciate the help you guys can provide.
There are two parts to this:
Getting the user's IP address. You mentioned you're using 'EB'; I presume you mean AWS ELB (Elastic Load Balancer)? If so, then you need to read the X-Forwarded-For HTTP header in your app code, since otherwise what you'll really detect is the ELB's IP address. X-Forwarded-For contains the user's real IP, or rather the IP of the end connection being made (there's no telling if this is really a VPN, proxy or something else, but it's as far as you can get with an IP).
Querying an IP DB that can turn the addr into a location object. There are tons of libraries for you. Assuming you're using Node, you can use node-geoip as you mentioned. Or you can just search 'geoip service' on Google and find managed services, like Telize on Mashape. If you don't want to manage the DB lookup yourself or keep the thing up to date, then a managed service would help.
In either case, it's likely that you'll be doing asynchronous look-ups. In that case, you might want to use async/await to get the user's full object before injecting it into your React props and ultimately rendering it as an HTML string that's sent down to the client.
You could also use a library like redial to decorate your components with data requirements, and return a Promise you can await on to know when you're okay to render.
Since you probably want to enable client routing too (i.e. where the user can click on a route in their browser and the server isn't touched at all), you will probably need some way to retrieve the IP address/results based on that IP even when the server isn't involved in the initial render.
For that, you could write a REST service that retrieves the results. Or write a GraphQL back-end that gets the data. It doesn't matter how you write it, since the server will have access to the X-Forwarded-For header and can use that to retrieve the results and send back location-aware data.
FYI, I'm writing a React starter kit (called ReactNow) that uses rxjs for handling async streams. It's not ready yet, but it might help you figure out the code layout that would offer a balanced mix between rendering on the server, and writing universal code that requires some heavy lifting from the server.

Could not retrieve the CDN endpoints in subscription with ID

Searched Google and SO; no luck.
Just got this message in Azure for 3 CDN endpoints.
There seems to be no way to know what is going on without MS support. It is a test account and I do not recall setting this. I have been through similar obfuscated MS error messages before, only to discover that Azure had crashed.
What does it mean?
This isn't really a direct answer, but could help with the general problem of "what happens if the CDN goes down?".
There is a recent development called the "Progressive Web App".
Basically, unless served from localhost, everything has to be over HTTPS, but the script is cached as a local application in your browser.
When your app makes requests to the registered domain, these are intercepted by a callback you put in your serviceWorker.js, so you can even cache application data locally and sync the local data occasionally with the server (or on receive events if you're using WebSockets).
Since the Service Worker intercepts REST calls to the registered domain, this in theory makes it fairly easy to add to just about any framework.
https://developers.google.com/web/fundamentals/getting-started/codelabs/your-first-pwapp/
Sometimes there is a (global) problem with the CDN. It has happened before.
You can check the Azure CDN status on this page: https://azure.microsoft.com/en-us/status/
At the moment everything looks good; are you still having problems?
