After upgrading from 0.12.2 to 0.13.5, a connectivity issue came up with the GraphQL component. Prefect Server is running on a different server, but the GraphQL URL remains http://localhost:4200/graphql. server.ui.graphql_url worked fine with version 0.12.2, but now I can't find any way to configure the GraphQL URL properly.
Below you will find the config.toml:
$ cat ~/.prefect/config.toml
[logging]
level = "INFO"
[api]
url = "http://192.168.40.180:4200"
[server.database]
host_port = "6543"
[context.secrets]
SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/xx/XX/Xx'
[server.ui]
graphql_url = "http://192.168.40.180:4200/graphql"
In the image you can see a POC of the case.
I'm a little bit confused about the old and the new way to configure Prefect Server. Do you have any idea about this issue?
EDIT: The ticket I mentioned below has been closed; when 0.13.9 is released, it'll contain a new runtime config, apollo_url (a more accurate name, since that's the container we're looking for anyway). The value is inserted into a static settings file in the UI build and fetched when the application starts. This should address all the points mentioned below.
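If it ships as described, the config.toml above would presumably swap graphql_url for the new key, along these lines (a sketch only; treat the exact key and section as unconfirmed until 0.13.9 is actually out):

[server.ui]
apollo_url = "http://192.168.40.180:4200/graphql"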
This is a change from Prefect Server ^0.13.0, which removed the graphql_url variable as a configurable environment variable.
The previous version of Server used a find-replace on the UI code, which is compiled and minified at image build time. The reason for this is that it moves the burden of installing the required Node modules and building the application away from client-side installations and onto Prefect at release time instead, since these steps can take a long time (10+ minutes each) in containerized environments. However, the downside is that the environment variables are injected at build time, so changing them requires a less-than-desirable lookup of the previously-injected values, which in practice means pulling a new image.
We chose to ship the new version with an in-app input, which allows changing the Server endpoint at browser run-time instead. This gives a single UI instance the flexibility to connect to any accessible Server installation, leveraging local storage to persist the setting between browser sessions.
That said, we have a ticket open to re-expose the default configuration in a better way than in the previous version. You can follow that ticket here.
Currently I'm investigating a setup backed by api-platform with the following goals:
the PHP backend MUST yield minimal resource payloads, thus I do not want to embed relations at all
the PHP backend SHOULD be able to run in alternative runtimes, e.g. Swoole
the webserver should push related resources via HTTP/2 Push, leveraging the built-in Vulcain support of the api-platform distribution
I cannot find that many resources about those setups - at least not in such a form that they answer subsequent questions sufficiently.
My starting setup was simply based on the api-platform distribution 2.6.8
So, until now I've learned the following things:
out of the box, the Caddy + HTTP/2 Push setup works with the PHP container based on php:8.1-fpm-alpine - while Caddy obviously talks to it directly via php_fastcgi
when I was fooling around with the currently available cache-handler, I was able to get the HTTP cache working, but I was struggling to find any information about how cache invalidation works. The api-platform docs mostly focus on Varnish; there is also only a VarnishPurger shipped in the api-platform core. Writing a custom one should not be that hard if the Caddy cache-handler somehow allows BAN requests or something similar - where can I find info about that? I see that the handler is based on Souin, but unfamiliar as I am, I have no clue whether (and how) Souin supports cache invalidation at all.
when the PHP container is (in my current testing scenario) based on Swoole, php_fastcgi cannot be used in Caddy - instead, I ended up using reverse_proxy (as described in the Vulcain docs), which basically works and serves proper HTTP responses but does not push any resources requested with Preload headers (as I said, this worked when the PHP backend was based on PHP-FPM). How can I debug what happens here? Caddy does not yield any info about the push handling - nor does the Vulcain Caddy module.
Long story short(er): to sum up my questions
how can I figure out why Caddy + Vulcain is not working in a reverse_proxy setup?
is the current state of the Caddy cache-handler functional / supported by the api-platform distribution?
how can I implement/support BAN requests (or other fine-grained cache invalidation) for the Caddy cache-handler?
Souin supports invalidation using the PURGE HTTP method. I already wrote a PR to set up Souin in the api-platform/core project, but they are busy with the v3.0 release. Maybe in the near future they'll review and probably merge it, I dunno. But if you use a decorator on the Varnish purger and use the code I wrote in the PR, you'll be able to automatically purge the endpoints associated with the base route.
I am using Node.js & GraphQL to build an API server.
I was wondering how I can make the schema show up automatically when using tools like Postman or GraphQL Playground to send requests to a remote API endpoint (currently hosted on Heroku).
When I launch the server and GraphQL Playground on localhost, it will show the schema and docs on the right side like below.
But when I change the endpoint to the remote server (Heroku in this case), it won't show any schema or docs. The schema and docs sidebar just keeps loading, like in the picture below (I hid the URL for privacy reasons).
The schema is also not showing in Postman.
Do I need to configure something or use some package in my server so that it will return and show the schema automatically on the client side?
Thank you.
You might try either setting the introspection: true flag in the constructor parameters of your apollo-server or, if it suits your needs, deploying the development version of your app to Heroku.
An explanation:
To see the schema and docs, your client obviously has to fetch them first.
This request for the GraphQL schema is usually called "introspection".
Having the full schema of the backend GraphQL API is undoubtedly convenient during development.
However, when our app is running in the production environment, we usually have to expect malicious clients. Serving introspection in production mode might therefore be unsafe, because it would give a lot of additional information to a potential attacker.
That's why introspection is turned off by default in the production environment in the current version of apollo-server.
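For reference, turning it on explicitly might look roughly like the following (a minimal sketch assuming apollo-server 2.x and existing typeDefs/resolvers; weigh the security trade-off above before doing this in production):

const { ApolloServer } = require('apollo-server');

const server = new ApolloServer({
  typeDefs,             // your schema definition
  resolvers,            // your resolver map
  introspection: true,  // let clients such as Playground or Postman fetch the schema, even in production
  playground: true,     // optionally keep GraphQL Playground itself enabled in production too
});

// Heroku provides the port via process.env.PORT
server.listen({ port: process.env.PORT || 4000 });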
I have a scenario where users need to reload the Vue.js app to get the latest version whenever I update the whole application on the server, including the backend APIs.
Suppose a user who is already on production version 1.1 of the Vue app keeps using it even after the update on the server (i.e. to 1.2). In such cases the backend APIs might have changed and the app would break.
Any short and easy ways/methods to solve the above?
One way is to send the version number (1.1 or 1.2) with all your API requests to the backend. In the backend you check the version and send a special error if it is an old version. In the frontend you handle this special error by refreshing the page. Also make sure that your new *.js files have new names so the client browser fetches the new version and does not load any cached *.js files.
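A rough sketch of the frontend half of this idea (assuming axios, and a backend that answers 409 with a VERSION_MISMATCH code for stale clients - both of which are assumptions, not part of the answer above):

import axios from 'axios';

// Version baked into the bundle at build time (e.g. injected from package.json by your bundler).
const APP_VERSION = '1.1';

// Attach the version to every API request.
axios.interceptors.request.use((config) => {
  config.headers['X-App-Version'] = APP_VERSION;
  return config;
});

// Reload the page when the backend signals that this client is outdated.
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    const res = error.response;
    if (res && res.status === 409 && res.data && res.data.code === 'VERSION_MISMATCH') {
      window.location.reload(); // picks up the newly named *.js bundles
    }
    return Promise.reject(error);
  }
);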
Before asking this, I searched a lot, and there are many articles about it. But my question goes a little deeper.
I have an application using Node.js/Express.js + MongoDB + Reddit + PM2 clustering mode + a Bitcoin and card payment gateway + an API system.
My problem is that I'm developing this application in real (production) mode, and it's really awful. Sometimes I release little code updates, and when I run pm2 log it shows me a syntax error or something else, so I try to fix it and release again. During this time, the application, with many users, is down.
I also have to say that something like a Bitcoin payment needs real tests - real requests and responses from the blockchain. How can I have a test environment where I can test everything exactly as in real mode, and then, if everything is fine, deploy it to real mode?
An environment that is easy to code and test in, and then easy to deploy from? Can Mocha help me with exactly what I need? I'm using PM2 clustering mode.
Your question is not a proper question, but rather a layer of questions, some opinion-based, some too broad to answer. But let's try breaking it down.
The stated problem is "when I'm developing this application in real mode.... I release updates ... it shows me syntax error... application is down". I will read this as: the main problem is that you're developing in a production environment. Let's forget for a while how lousy a practice that is, and let's focus on something constructive.
Let's define rough steps to take.
Live environment
The most pressing problem seems to be that you work on the live app, where crashing it during development means crashing it for your users as well. Let's deal with that.
Immediately change all your access codes, keys, usernames and passwords, and store them in an environment file (which is safe to encrypt and back up, but should not be committed to source control), say, environment-production.env.
Then create a second set of credentials for all the services that you use. For MongoDB, for example, it's easy: just create a local database instance called, say, test_database. For Reddit, create a second app, call it my-app-test, for example. Some services might have an option to create a set of test credentials right there in the app; with others you'll simply have one app for test and one for production.
Create a new environment file, e.g. test-environment.env, with all the same keys (e.g. REDDIT_APPID, REDDIT_SECRET, MONGODB_URL, BLOCKCHAIN_GATEWAY_KEY etc), but new values.
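For illustration, such a file might look like the following (the key names are taken from above; the values and the NODE_ENV line are placeholders/assumptions for the sketch):

# test-environment.env - placeholder values only
export NODE_ENV=test
export REDDIT_APPID=my-app-test
export REDDIT_SECRET=replace-with-test-secret
export MONGODB_URL=mongodb://localhost:27017/test_database
export BLOCKCHAIN_GATEWAY_KEY=replace-with-test-key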
Now, for one, you have a test environment. Make an alias, e.g. alias dev="cd $HOME/projects/my-reddit-bitcoin-app && source test-environment.env". Every day you come to work on the app, type dev, then you can start pm2 etc. and work safely in the dev environment. Your users will never see your crashes.
Only when you're sure you have a new feature or bugfix completed, switch environments (source environment-production.env), deploy the new app to the server where it runs, and pm2 restart or whatever you use for these deployments. Switch back to the test env immediately, before working on the code anew.
Read up more on how to separate test/prod environments. Read a bit on git workflows (e.g. branch off of the latest master to a feature or bugfix branch; when tested, merge it back in. Then tag it "release-" and deploy to production. Then go automate all of that if possible.)
Testing
Mocha is perfectly suitable for running tests for a Node/Express app. It's the tests that matter.
You say bitcoin payment... needs request and response. Let's see how to do that.
Add nock (https://www.npmjs.com/package/nock) to your app (npm i -D nock).
Import it and put it at the top of your test file, e.g. at the top of the some-test.spec.js file:
const nock = require('nock')
Start recording requests, e.g. add this in the before() block of your tests:
describe('My tests', function () {
  before(function () {
    nock.recorder.rec();
  });
  // ... tests
});
Now, run one test at a time (e.g. write one test that does one specific task from your app) and check what's in the console. E.g. if you make a request (request.post('http://reddit.com/api/submit', jsonData)), you'll see nock printing the exact response (in JSON format) in the console as the test runs. Copy that into the test file, e.g. put it at the bottom as:
var testResponse = <whatever was in the console in json format. Or string, whatever>. // homework is to find out why var and not const, if this is at the end of the file.
Now stop the recorder (comment it out), and in your actual test, run this instead:
const pipe = nock('http://www.example.com')
  .get('/resource')
  .reply(200, testResponse);
Do that for all your requests.
Now what you have is a test setup so that when you change the code, it should not run against the real Reddit API or the real payment gateway API, but get your mocked responses instead. Pair it up with some good assertions and you should be fine. Make sure you mock everything. If you add new types of requests, make sure to record them and add them to your procedure.
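Putting the pieces together, a single mocked test might look roughly like this (a sketch only; postToReddit, the module path and the response shape are made-up names for illustration):

const nock = require('nock');
const assert = require('assert');
const { postToReddit } = require('../src/reddit'); // hypothetical module under test

// Response captured earlier with nock.recorder.rec(), trimmed down for the example.
const testResponse = { json: { errors: [], data: { id: 't3_abc123' } } };

describe('postToReddit', function () {
  before(function () {
    // Intercept the outgoing call so the test never reaches the real Reddit API.
    nock('http://reddit.com')
      .post('/api/submit')
      .reply(200, testResponse);
  });

  after(function () {
    nock.cleanAll();
  });

  it('returns the id of the created post', async function () {
    const result = await postToReddit({ title: 'hello', sr: 'test' });
    assert.strictEqual(result.id, 't3_abc123');
  });
});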
Now, all this is very vague. Broad. Just one way to do it. Lengthy process. Probably not the best one. Not tailored to your specific conditions. But it should get you started. Take those things, step by step, and if you get stuck, come back to Stackoverflow. But do start working on it, because your current method seems to be unsustainable in the long run.
I'm migrating a combined Azure Website (with both Controllers and ApiControllers) to a split Web App and API App. Let's call it MyApp.
I've created MyAppDevApi, MyAppTestApi, and MyAppProductionApi API Apps (in different App Services) to host the three environments, expecting to promote code from one environment to another.
So far, I've only deployed to MyAppDevApi, since I'm just getting started.
When I do Add/Azure API App Client to my UI-only project to start referring to the API app, and I point it to MyAppDevApi, it uses AutoRest to create classes in my code. These classes now all have the name MyAppDevApi, rather than just MyAppApi, which is the actual namespace of the code I'm deploying to every environment. Obviously, I can't check that in... how can I promote that through Test and Prod?
There's nothing in the Swagger JSON that refers to this name, so it must be on the AutoRest side (I think).
Has anyone come up with a strategy or work-around to deal with this multiple-environment promotion issue with API Apps?
Edit
So far the best thing I've come up with is to download the Swagger from the API App to a local file (which, again, has only the namespace from the original code, not from the name of the API App), and then import it into the Web App. This will generate classes in the Web App that have the naming I expect.
The problem is that I then have to edit the generated MyAppApi.cs file's _baseUri property to pull from an AppSetting, have different web.config.dev, .test, and .prod files, and then do the web.config transform. I mean, that'll work, but then every time I change the API App's interface, I'll regenerate... and then I'll have to remember to change the _baseUri again... and someone, sometime, is going to forget to do this and then deploy to production. It's really, really fragile.
So... anyone have a better idea?
I'm not quite sure why you're creating three different apps, one for each environment. One application is fine; use web.config transforms for each environment. This is the general way I do all of my apps, and it works fine.
Information about how to apply web.config transforms can be found here, which may help in your situation.
Hope that helps.
Well, here's how I've solved this:
Download Swagger file from API App to local hard drive.
Import local Swagger file into Web App to generate classes that have the naming from my code, not from the environment.
Use AppSettings to specify the environment-specific settings to point to the API App. This can be either a web.config transform, or you can just specify them in the Azure Portal on the Web App in Application Settings.
Instantiate the generated API App Client using the constructor that takes in a URL to point to the API App (these are at class level, hence static):
private readonly static Uri apiAppUrl = new Uri(CloudConfigurationManager.GetSetting("ApiAppUrl"));
private readonly static MyAppApi myAppApi = new MyAppApi(apiAppUrl);
I'd still love a solution to this that doesn't require downloading the Swagger file, but, all in all, if that's the only workaround necessary, it's not all that bad.