Origen remotes and HTTP files - origen-sdk

We use remotes extensively, and a question came up about how Origen handles direct links to HTTP files versus revision control systems. Can Origen remotes just use an HTTP link as the vault?
Origen.config.remotes do
  [
    {
      dir:   "mydir",
      vault: "http://mycompany/fileserver/myspreadsheet.xlsx"
    }
  ]
end

This is not currently supported, but should not be too difficult for you to add.
The remotes system uses the Origen revision control API under the hood, passing the given rc_url: option (or vault:, as in this example, which is an alias) through to the revision control API to deal with - https://github.com/Origen-SDK/origen/blob/master/lib/origen/remote_manager.rb#L140
So, if you were to add an http(s) revision control driver to Origen, then http remote urls should work - https://github.com/Origen-SDK/origen/blob/master/lib/origen/revision_control.rb
Note that for such an http revision control driver, you would not need to support the whole API (commit, etc.), just the remotes_method, which is defined as checkout by default but could actually be anything you like, e.g. get might be more appropriate for a fetch over http - https://github.com/Origen-SDK/origen/blob/921248e1e8514f28284ff7fca74f9ccf2243d061/lib/origen/revision_control/base.rb#L32
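As a rough illustration, a driver along these lines might be all that's needed (an untested sketch; it assumes the Base class exposes remote and local accessors the way the existing drivers do):

require "fileutils"
require "net/http"
require "uri"

module Origen
  module RevisionControl
    # Sketch of an http(s) driver: only the remotes_method (checkout by
    # default) needs to work, not the full commit/diff/etc. API
    class HTTP < Base
      def checkout(path = nil, options = {})
        uri = URI.parse(remote.to_s)
        FileUtils.mkdir_p(local.to_s)
        file = File.join(local.to_s, File.basename(uri.path))
        # Net::HTTP.get handles both http and https URIs
        File.binwrite(file, Net::HTTP.get(uri))
        file
      end
    end
  end
end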

Related

API Platform with alternative Runtime, Caddy, Vulcain, Cache ecosystem

Currently I'm investigating a setup backed by api-platform with the following goals:
the PHP backend MUST yield minimal resource payloads, thus I do not want to embed relations at all
the PHP backend SHOULD be able to run in alternative runtimes, e.g. Swoole
the webserver should push related resources via HTTP2 Push, leveraging the built-in vulcain support of the api-platform distribution
I cannot find that many resources about such setups - at least not in a form that answers my follow-up questions sufficiently.
My starting setup was simply based on the api-platform distribution 2.6.8
So, until now I've learned the following things:
out of the box, the caddy + http2 push setup works with the PHP container being based on php:8.1-fpm-alpine - caddy is obviously talking to it directly via php_fastcgi
when I was fooling around with the currently available cache-handler, I was able to get the http cache working, but I struggled to find any information about how cache invalidation works. The api-platform docs mostly focus on varnish; there is also only a VarnishPurger shipped in the api-platform core. Writing a custom one should not be that hard if the caddy cache-handler allows BAN requests or something similar - where can I find info about that? I see that the handler is based on Souin, but unfamiliar as I am, I have no clue whether (and how) Souin supports cache invalidation at all.
when I change the php container to be (in my current testing scenario) based on Swoole, php_fastcgi cannot be used in caddy - instead, I ended up using reverse_proxy (as described in the vulcain docs), which basically works and serves proper http responses but does not push any resources requested with Preload headers (as I said, it worked when the PHP backend was based on PHP-FPM). How can I debug what happens here? Caddy does not yield any info about the push handling - nor does the vulcain caddy module
Long story short(er): to sum up my questions
how can I figure out why caddy + vulcain is not working in a reverse_proxy setup?
is the current state of the caddy cache handler functional / supported by the api-platform distribution?
how can I implement/support BAN requests (or other fine-grained cache invalidation) for the caddy cache handler?
Souin supports invalidation using the PURGE HTTP method. I already wrote a PR to add Souin support to the api-platform/core project, but they are busy with the v3.0 release. Maybe in the near future they'll review and merge it, I dunno. But if you use a decorator on the varnish purger and use the code I wrote in the PR, you'll be able to automatically purge the endpoints associated with the base route.
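For reference, a PURGE is just an ordinary HTTP request with a non-standard method, so you can send one from anywhere. A sketch in Ruby (the URL is a placeholder for a cached resource in your own setup):

require "net/http"
require "uri"

# Placeholder URL; point this at the cached resource to invalidate
uri = URI.parse("https://localhost/api/books/1")

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = (uri.scheme == "https")

# net/http has no built-in PURGE class, so build a generic request:
# (method, request-has-body?, response-has-body?, path)
request = Net::HTTPGenericRequest.new("PURGE", false, true, uri.request_uri)
response = http.request(request)
puts response.code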

How to list files from enterprise GitHub repository without cloning

I am trying to list the files present in an enterprise GitHub repository. I tried using the GitHub API, but its base URL is api.github.com, whereas the base URL is different for my instance; when I put in my own URL, it says it couldn't resolve. How can I achieve this for a URL in the below format?
https://company.net/org/reponame
The example below works for me, but with the URL that I have it doesn't:
curl -k https://api.github.com/user/repo?refs=master
For an on-premise GitHub Enterprise, the documentation mentions
REST API endpoints — except Management Console API endpoints — are prefixed with the following URL:
http(s)://[hostname]/api/v3
In your case: https://company.net/api/v3
From there, using the Git Database API -- list tree, you can list all files, as illustrated here.
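For example, a sketch in Ruby (org/reponame and the master branch are placeholders; GHE_TOKEN is assumed to hold a personal access token for your instance):

require "json"
require "net/http"
require "uri"

# On-premise endpoints are prefixed with /api/v3
uri = URI.parse("https://company.net/api/v3/repos/org/reponame/git/trees/master?recursive=1")

request = Net::HTTP::Get.new(uri)
request["Authorization"] = "token #{ENV['GHE_TOKEN']}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

# Every tree entry of type "blob" is a file path in the repository
JSON.parse(response.body)["tree"].each do |entry|
  puts entry["path"] if entry["type"] == "blob"
end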

Gatsby + Netlify + Contentful bridge

I am trying to configure a Contentful webhook for auto deploy in Netlify.
I am getting a 404 during content changes.
Disclaimer: I work for Netlify.
This setup works well for many customers. I assume you have set up a separate build hook on the Build & Deploy settings page and are using it? You cannot use our automatic webhooks that trigger builds from GitHub/GitLab/BitBucket to trigger builds from other external systems like Contentful.
There is no authentication required, and a 404 suggests to me a mistyped webhook address, as we'll only return 404s when you try to visit something that doesn't exist.
Do make sure that:
your site is set up to build using our continuous deployment system. You can't trigger a site that we can't fetch via git, and only sites fetched via git can be built via our CD.
you use https
you POST (I assume this is the default for Contentful's outgoing hooks but if you can choose - POST is what you want)
your webhook host is api.netlify.com
and in general you use the exact hook address you get from our UI.
If that doesn't show an obvious typo, this is probably something you'll need to contact our Tech Support about, including information like your webhook address and the site you are attempting to trigger a build from.
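For reference, triggering a build hook is nothing more than an unauthenticated POST; a sketch with a placeholder hook ID:

require "net/http"
require "uri"

# Placeholder hook ID; use the exact build hook URL from the Netlify UI
uri = URI.parse("https://api.netlify.com/build_hooks/your_hook_id")

# Build hooks need no authentication; an empty POST triggers a build
response = Net::HTTP.post(uri, "")
puts response.code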

How to update a fork from its original via the GitHub API

I've created a fork of a GitHub repository via the GitHub API. Now, later on, I want to pull any updates from the origin repository into the fork. This should always be a fast-forward in my use case. I have read access to the origin repository and read-write to the fork.
I thought of maybe creating a Pull Request and then accepting it (both of which you can do via the API), but this creates noise (Pull Requests being created and destroyed) and just doesn't seem right.
Is there any way to do this via the API?
I don't have the inside scoop on this, so this might be a misfeature that will be removed at some point. Until then:
GitHub makes available all commits across (I assume) the entire fork network, so APIs that accept commit hashes will be happy to work on hashes from the upstream, or across other forks (this is explicitly documented for repos/commits/compare and creating a pull request).
So there are a couple of ways to update via APIs only:
Using the Git data api: This will usually be the best option, if you don't change your fork's master (see the sketch after this list).
Get upstream ref /repos/upstream/repo/git/refs/heads/master, and get the hash from it
Update your fork PATCH /repos/my/repo/git/refs/heads/master with the same hash.
Using a higher-level merge api: This will create a merge commit, which some people like.
Get the upstream ref like before
Create a merge to branch master in your repo.
Pull-request to yourself and merge it via api: This will end up creating not only a merge commit, but a PR as well.
Create PR: POST to /repos/your/repo/pulls with head = "upstream:master"
Get the PR url from the response,
Merge it: PUT to /repos/your/repo/pulls/number/merge
It's possible that the "upstream:master" notation would also work for options 1 & 2, saving an API call.
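A sketch of option 1 in Ruby (the repo names are placeholders, and GITHUB_TOKEN is assumed to hold a token with push access to the fork):

require "json"
require "net/http"
require "uri"

def github(request)
  request["Authorization"] = "token #{ENV['GITHUB_TOKEN']}"
  request["Accept"] = "application/vnd.github+json"
  uri = request.uri
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
end

# 1. Read the head commit of the upstream master (upstream/repo is a placeholder)
ref_uri = URI.parse("https://api.github.com/repos/upstream/repo/git/refs/heads/master")
sha = JSON.parse(github(Net::HTTP::Get.new(ref_uri)).body).dig("object", "sha")

# 2. Point the fork's master at the same commit (my/repo is a placeholder);
#    without force, GitHub only accepts a fast-forward update
patch_uri = URI.parse("https://api.github.com/repos/my/repo/git/refs/heads/master")
patch = Net::HTTP::Patch.new(patch_uri, "Content-Type" => "application/json")
patch.body = JSON.generate(sha: sha, force: false)
puts github(patch).code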
Not possible currently, but I've gone ahead and added that to our API wishlist. :)
This is what worked for me, since I needed to update from upstream but without a merge commit. My ref is master.
Create a pull request POST /repos/:myUsername/:myRepo/pulls
INPUT: {title, head: 'ownerFromUpStream:master', base: 'master', ...}
Get sha from pull request (ex. response.data.head.sha)
PATCH /repos/:myUsername/:myRepo/git/refs/heads/master
PARAMS: {sha: shaFromPullRequest}
Docs: Update ref, Create pull request.
This is now possible in the GitHub API; documentation here, and announcement here.
In summary, make a POST request to /repos/{owner}/{repo}/merge-upstream with the proper authentication and the payload of { "branch": "branch-name" }.
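For example, a sketch (owner/repo and the branch name are placeholders; assumes a token that can push to the fork):

require "json"
require "net/http"
require "uri"

uri = URI.parse("https://api.github.com/repos/owner/repo/merge-upstream")

request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
request["Authorization"] = "token #{ENV['GITHUB_TOKEN']}"
request["Accept"] = "application/vnd.github+json"
request.body = JSON.generate(branch: "branch-name")

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts response.code # 200 when synced, 409 if the branch has diverged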

Browser plugin which can register its own protocol

I need to implement a browser plugin which can register its own protocol (like someprotocol://someurl) and be able to handle calls to this protocol (like user clicking on 'someprotocol' link calls function inside my plugin). As far as I understand, Skype does something similar, except I need to handle links within page context and not in a separate app. Any advice on how this can be done? Can this be done without installing my own plugin, with the help of flash/java?
Things are going to be slightly more complicated than you think.
You're going to have to create an entire application, not just a browser plugin (that plugin can be part of your application). The reason I consider it to be a complete application is that you're going to need to modify registry settings on the client machine to register your custom URL handler.
Here's an MSDN article describing exactly what you have to do to register the custom URL handler on a Windows client:
Registering an Application to a URL Protocol
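As a rough sketch of what that registration involves (Windows only, needs admin rights; the protocol name and handler path are placeholders, and it assumes Ruby's standard win32/registry library):

require "win32/registry"

# Keys as described in the MSDN article: a root key named after the
# protocol, flagged with "URL Protocol", plus a shell open command
Win32::Registry::HKEY_CLASSES_ROOT.create("someprotocol") do |reg|
  reg[""] = "URL:Someprotocol Protocol" # default value
  reg["URL Protocol"] = ""
end

Win32::Registry::HKEY_CLASSES_ROOT.create('someprotocol\shell\open\command') do |reg|
  # "%1" receives the full someprotocol://... link that was clicked
  reg[""] = '"C:\Program Files\MyHandler\handler.exe" "%1"'
end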
