Gatsby + Netlify + Contentful bridge

I am trying to configure a Contentful webhook to auto-deploy my site on Netlify.
I am getting a 404 when content changes.

Disclaimer: I work for Netlify.
This setup works well for many customers. I assume you have set up a separate build hook on the Build & Deploy settings page and are using it? You cannot use our automatic webhooks, which trigger builds from GitHub/GitLab/Bitbucket, to trigger builds from other external systems like Contentful.
There is no authentication required, and a 404 suggests to me a mistyped webhook address, as we only return a 404 when you request something that doesn't exist.
Do make sure that:
your site is set up to build using our continuous deployment system. You can't trigger a site that we can't fetch via git, and only sites fetched via git can be built via our CD.
you use HTTPS
you POST (I assume this is the default for Contentful's outgoing hooks, but if you can choose, POST is what you want)
your webhook host is api.netlify.com
and in general, you use the exact hook address you get from our UI.
If that doesn't show an obvious typo, this is probably something you'll need to contact our Tech Support about, including information like your webhook address and the site you are attempting to trigger a build from.
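In the meantime, one quick way to rule out the Contentful side is to POST to the hook yourself. A minimal sketch in Node (the hook ID below is a placeholder; copy the exact URL from the Build & Deploy settings rather than typing it by hand):

```javascript
// Trigger a Netlify build hook from Node 18+ (global fetch).
// BUILD_HOOK_ID is a placeholder for the ID shown in your site's
// Build & Deploy settings.
const hookUrl = "https://api.netlify.com/build_hooks/BUILD_HOOK_ID";

fetch(hookUrl, { method: "POST" })
  .then((res) => console.log(res.status)) // 2xx: build queued; 404: wrong URL
  .catch(console.error);
```

If this script gets a 404 too, the URL itself is wrong; if it queues a build, the problem is in how Contentful is configured to call it.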

Related

Is it possible to update parts/directives in the "content-security-policy" header using the DeclarativeNetRequest API?

I am in the process of migrating from Manifest V2 to V3, from the Web Request API to the Declarative Net Request API. Using Web Request, I modify the "content-security-policy" header by adding a domain to the list in various directives (default-src, frame-src, etc.). I tried using the "append" operation in the rule action. Is it possible to target a directive? What if the directive does not exist? Does append just add the supplied string to the end? With Web Request, I was able to examine each directive and update each accordingly before returning the new value. This allowed me to inject a script that is needed into each frame.
Instead, would it be possible to continue using the Web Request API with V3? In my setup, my Chrome extension is "Published - unlisted". I do use the force-install option when deploying the extension to our internal users, and the only reason I have it unlisted rather than private is so that users who have the extension get updates whenever a new version is released. Would it be possible to have users updated without having the extension listed? Perhaps by hosting the extension on my own server? Please advise on what can be done to keep the ability to update the response header, specifically the "content-security-policy" header, the way I have done before, and whether I can continue to use the Web Request API going forward (with V3). On the Chrome developer site, there's a mention of continuing to use Web Request if force install is used, and only if it's "deployed to a given domain or to trusted testers", but I'm not sure what that actually means. What would I need to do to meet the criteria?
I tried using the append operation in the rule action via the Declarative Net Request API, but it's not working as expected. I don't see the security policy being updated when I inspect the response headers in dev tools. I also get errors stating that many scripts, images, etc. violate the security policy on websites that did not have one to begin with (my extension targets any website).
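For concreteness, this is the shape of the rule I have been testing (a minimal sketch; the rule ID, the appended value, and the matching condition are assumptions on my part):

```javascript
// Append a value to the CSP response header of documents via a dynamic rule.
// Requires the "declarativeNetRequest" permission plus host permissions
// for the sites being modified.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // replace any previous version of this rule
  addRules: [
    {
      id: 1,
      priority: 1,
      condition: {
        // No urlFilter: match all URLs covered by host permissions.
        resourceTypes: ["main_frame", "sub_frame"],
      },
      action: {
        type: "modifyHeaders",
        responseHeaders: [
          {
            header: "content-security-policy",
            operation: "append",
            // Hypothetical origin I want to allow for frames:
            value: "frame-src https://scripts.example.com",
          },
        ],
      },
    },
  ],
});
```

My suspicion is that, since browsers enforce every policy they receive, an appended policy can only tighten restrictions (or introduce them on pages that had none), which would explain the violations I'm seeing; appending cannot merge a new source into an existing directive the way my Web Request code did.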

API Platform with alternative Runtime, Caddy, Vulcain, Cache ecosystem

Currently I'm investigating a setup backed by api-platform with the following goals:
the PHP backend MUST yield minimal resource payloads, thus I do not want to embed relations at all
the PHP backend SHOULD be able to run on alternative runtimes, e.g. Swoole
the webserver should push related resources via HTTP/2 Push, leveraging the built-in Vulcain support of the api-platform distribution
I cannot find many resources about such setups - at least not in a form that answers my subsequent questions sufficiently.
My starting setup was simply based on the api-platform distribution 2.6.8.
So, until now I've learned the following things:
out of the box, the Caddy + HTTP/2 Push setup works with the PHP container based on php:8.1-fpm-alpine, with Caddy directly using php_fastcgi
when I was fooling around with the currently available cache-handler, I was able to get the HTTP cache working, but I struggled to find any information about how cache invalidation works. The api-platform docs mostly focus on Varnish; there is also only a VarnishPurger shipped in the api-platform core. Writing a custom one should not be that hard if the Caddy cache-handler allows BAN requests or something similar - where can I find info about that? I see that the handler is based on Souin, but being unfamiliar with it I have no clue whether (and how) Souin supports cache invalidation at all.
when changing the PHP container (in my current testing scenario) to be based on Swoole, php_fastcgi cannot be used in Caddy - instead, I ended up using reverse_proxy (as described in the Vulcain docs), which basically works and serves proper HTTP responses but does not push any resources requested with Preload headers (as I said, it worked when the PHP backend was based on PHP-FPM). How can I debug what happens here? Caddy does not yield any info about the push handling - nor does the Vulcain Caddy module
Long story short(er): to sum up my questions
how can I figure out why caddy + vulcain is not working in a reverse_proxy setup?
is the current state of the Caddy cache handler functional / supported by the api-platform distribution?
how can I implement/support BAN requests (or other fine-grained cache invalidation) for the Caddy cache handler?
Souin supports invalidation using the PURGE HTTP method. I already wrote a PR to set up Souin in the api-platform/core project, but they are busy with the v3.0 release. Maybe in the near future they'll review and probably merge it, I dunno. But if you use a decorator on the Varnish purger and use the code I wrote in the PR, you'll be able to automatically purge the endpoints associated with the base route.
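Wire-wise, the purge is just an HTTP request against the cached URL. A minimal sketch of what that looks like (the URLs are placeholders, and this assumes purge support is enabled in your Souin/Caddy cache config; inside api-platform you would do the equivalent from a PHP purger decorator):

```javascript
// Sketch: invalidate cached responses in a Souin-backed Caddy cache via PURGE.
// Run as an ES module (e.g. node purge.mjs) on Node 18+ for top-level await.
async function invalidate(resourceUrl) {
  const res = await fetch(resourceUrl, { method: "PURGE" });
  if (!res.ok) throw new Error(`Purge of ${resourceUrl} failed: ${res.status}`);
}

// After updating a resource, purge both the item and its collection:
await invalidate("https://localhost/books/42");
await invalidate("https://localhost/books");
```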

Testing Instagram Basic API locally

I successfully followed the "first steps" guide here to test the Instagram API.
I did it as suggested in the docs, with a Heroku app.
Now that I have obtained my access token, I would like to test this NodeJS Instagram private API on my local machine, without having to deploy to Heroku every time I make a change, just for development purposes.
In practice, I would like to test it with localhost instead of myapp.herokuapp.com.
I thought to add a redirect OAuth URI like https://localhost:8443/auth/ in the app's OAuth settings.
As it requires the URI to begin with HTTPS, I guess I have to enable HTTPS in my Express JS app, as explained here.
Question
Before venturing into such a (for me) complicated realm, does anybody have experience with this, or know if this is the right way to test the Instagram API locally?
I was able to make it work with localhost, but it was very tedious.
These are the steps:
Enable HTTPS in the local environment (I used the https-localhost library; see the sketch after these steps).
[I don't know if this is mandatory] Create a test app* from the main app (https://developers.facebook.com/docs/development/build-and-test/test-apps/).
Set the redirect OAuth URI to https://localhost:<MY_PORT>/auth/ and also update all other URIs in the .../instagram-basic-display/basic-display/ settings.
Finally, don't forget to use the client-id (aka app-id) and app-secret of the test app in the requests; they are different from the parent app's.
*IMPORTANT: app-id and app-secret are different in the test app!
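For step 1, a minimal sketch using Node's built-in https module instead of the library (the certificate file names, port, and /auth/ route are assumptions chosen to match the settings above):

```javascript
const fs = require("fs");
const https = require("https");
const express = require("express");

const app = express();

// OAuth redirect target registered as https://localhost:8443/auth/
app.get("/auth/", (req, res) => {
  // Instagram appends ?code=... to the redirect URI after the user authorizes.
  res.send(`Authorization code: ${req.query.code}`);
});

// Self-signed certificate for localhost, e.g. generated with:
//   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
https
  .createServer(
    { key: fs.readFileSync("key.pem"), cert: fs.readFileSync("cert.pem") },
    app
  )
  .listen(8443, () => console.log("Listening on https://localhost:8443"));
```

Your browser will warn about the self-signed certificate; that's expected for local testing.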
You can also use ngrok, which allows you to create an HTTPS tunnel to your localhost.
It enables you to access your localhost via HTTPS over the internet by creating a public URL for you (e.g. https://xxxxxxx.ngrok.io/) that is accepted as a valid URI by the developer dashboard.
Also, there is no need to create a test app for this. Great tool for dev, IMHO.

OpenProject webhooks not fired

I'm trying to use OpenProject's webhook to send work package data to a third-party system, but it isn't even fired.
I did all my tests using this OP docker container, from version 7.4.0 to 7.4.7, but none of them worked. In all of these images the webhook is included and configured, so theoretically no additional setting is needed (except registering the webhook in the web interface).
Passenger and the worker:jobs task are running. There is no clue in the log files. The webhook is enabled and set to trigger an HTTP POST call to a localhost address.
Has anyone run into a similar issue? I'm not a Ruby developer, and I wonder if some kind of daemon or service start is missing.
Our team figured out how to make the webhooks work. You only need to check the Work package added and Work package updated options in the Email notifications group in the instance's System settings.
This is an undocumented setting, and I see it as a feature issue.
Nevertheless, it's working.

Deploying a test web app for each GitHub pull request

Is it possible for GitHub to trigger a new test deployment when a pull request is submitted? I'd like it to create a new folder on the server (Azure preferred) so that a test URL (e.g. http://testserver.com/PR602/) is generated that we can refer to in the pull request.
This would allow anyone to test a pull request without having to clone the repo, check out the branch, and build it locally.
In my initial research I found that Travis CI can deploy all branches, but I'm not clear on how this would be triggered. Do I have to write a custom app that's triggered by pull request webhooks? I'm hoping someone has discovered a simpler method.
Do I have to write a custom app that's triggered by pull request web hooks?
Yes, or find someone who happens to have written the exact webhook handler you need.
Writing a webhook handler isn't much work. If you don't want to integrate it with your current app, you can use a micro-framework like Flask to do this in only a few lines of code.
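For instance, a minimal sketch of such a handler in Express rather than Flask (the route, port, secret handling, and the deploy step are assumptions to illustrate the shape):

```javascript
const crypto = require("crypto");
const express = require("express");

const app = express();
// Keep the raw body so the signature can be verified byte-for-byte.
app.use(express.json({ verify: (req, _res, buf) => { req.rawBody = buf; } }));

// The secret you configure on the GitHub webhook (assumed set in the env).
const SECRET = process.env.WEBHOOK_SECRET;

app.post("/webhook", (req, res) => {
  // GitHub signs each delivery; verify X-Hub-Signature-256 before trusting it.
  const expected = "sha256=" +
    crypto.createHmac("sha256", SECRET).update(req.rawBody).digest("hex");
  const actual = req.get("X-Hub-Signature-256") || "";
  if (expected.length !== actual.length ||
      !crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(actual))) {
    return res.status(401).send("bad signature");
  }

  // React only to pull requests being opened or updated.
  if (req.get("X-GitHub-Event") === "pull_request" &&
      ["opened", "synchronize"].includes(req.body.action)) {
    // Hypothetical deploy step: build the PR branch into a per-PR folder
    // served as e.g. http://testserver.com/PR<number>/
    console.log(`Deploying PR #${req.body.number}...`);
  }

  res.sendStatus(202);
});

app.listen(3000);
```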
Coming back to this in 2022, there is now also the option of GitHub Actions, which is a first-party CI service. Actions provides a framework for defining what to do when certain triggers happen, and there's an extensive marketplace of drop-in components, so you may be able to do all of your triggering of other systems without writing any custom code or running a webserver to listen for webhooks.
