How to connect to a Routerlicious local server? - fluid-framework

I have followed this guide https://github.com/microsoft/FluidFramework/tree/main/server/routerlicious and have successfully set up a Routerlicious server and the gateway https://github.com/microsoft/FluidFramework/tree/main/server/gateway, but I am having trouble connecting the client to it.
Here is the client code config:
...
const hostUrl = "http://localhost:3000";
const ordererUrl = "http://localhost:3000";
const storageUrl = "http://localhost:3000";
const tenantId = "unused";
const tenantKey = "unused";

const serviceRouter = new RouterliciousService({
    orderer: ordererUrl,
    storage: storageUrl,
    tenantId: tenantId,
    key: tenantKey,
});

const container = await getContainer(
    serviceRouter,
    documentId,
    ContainerFactory,
    createNew
);
...
The error it gives me is:
Buffer is not defined
I guess it is because of the tenantId and tenantKey. How can I solve this?

It sounds like you already have Routerlicious started; if so, jump to the last header. To recap...
Routerlicious
1. Clone the Fluid Framework repository.
2. Install and start Docker.
3. Run npm run start:docker from the Fluid Framework repo root.
Routerlicious is our name for the reference Fluid service implementation.
To get Routerlicious started, npm run start:docker uses the pre-built docker containers on Docker Hub. These are prepackaged for you in a docker-compose file in the repository.
Gateway
Gateway is a reference implementation of a Fluid Framework host. Gateway lets you run Fluid containers on a website. More specifically, it's a Docker container that runs a web service; that web service delivers web pages with a Fluid loader on them.
While that's a good option for running any Fluid container on one site, it is probably easier to use your strategy: create your own website that connects directly to the service and loads your Fluid container.
Connecting to Routerlicious
To connect to your local Routerlicious instance, you need to know the orderer url, storage url, tenant id, and tenant key. By default, we provide a test local key in the config. The test tenant id is "local" and the test key is "43cfc3fbf04a97c0921fd23ff10f9e4b".
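Plugged into the question's own config, that looks roughly like this. Note the port numbers are assumptions based on the default docker-compose setup (orderer on 3003, storage on 3001); verify them against your compose file:

```javascript
// Connection settings for a local Routerlicious instance, using the default
// local test tenant. The ports are assumptions from the default docker-compose
// file; adjust if yours differ.
const connectionConfig = {
    orderer: "http://localhost:3003",
    storage: "http://localhost:3001",
    tenantId: "local",
    key: "43cfc3fbf04a97c0921fd23ff10f9e4b",
};

module.exports = { connectionConfig };
```

These values would then be passed to new RouterliciousService(...) exactly as in the question's snippet.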

Related

redirect URI in Azure web app authentication

I have browsed various questions here on SO, but none seem to have helped.
So, I have the following setup on Azure. I had a simple flask app running, which I could access using https://xyz.azurewebsites.net.
I was trying to look at the example here (https://learn.microsoft.com/en-us/azure/active-directory-b2c/configure-authentication-sample-python-web-app?tabs=linux). I can reproduce this example fine when I have the local server running and specifying the redirect uri as http://localhost:5000/getAToken.
Now I want to use my deployed app, so in the Azure portal, under Authentication, I changed the redirect URI to
https://xyz.azurewebsites.net/getAToken
However, this always returns the redirect URI mismatch error.
On the flask side, I have kept the configuration as:
REDIRECT_PATH = "/getAToken"
I also tried putting the full absolute URL, and that did not work either.
I have followed the same document you provided and was able to access the application even after deploying it to Azure App Service.
In app_config.py, change the authority_template to
authority_template = "https://{b2c_tenant}.b2clogin.com/{b2c_tenant}.onmicrosoft.com/{signupsignin_user_flow}"
Or copy-paste the tenant and user_flow values directly:
authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}"
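As a quick sanity check, the authority URL that results can be built and inspected directly (the tenant and user-flow names below are hypothetical placeholders, not values from the question):

```python
# Hypothetical tenant and user-flow names; substitute your own.
b2c_tenant = "contoso"
signupsignin_user_flow = "B2C_1_signupsignin1"

# Same template shape as in app_config.py above.
authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}"
authority = authority_template.format(tenant=b2c_tenant, user_flow=signupsignin_user_flow)
print(authority)
```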
Deploy the Application to Azure App Service:
Create a new repository in GitHub and push the code to it from VS Code.
OR
If you face any issues pushing the code to Git: create a new repository, then copy and clone the application you linked.
And change the values in app_config.py accordingly (from your local VSCode).
In Azure Portal => Create a new App Service with Run time Stack Python.
From Deployment center => Deploy the code using GitHub Actions.
Add the Redirect URI of the deployed Application in App registration.
https://YourDeployedAppName.azurewebsites.net/getAToken
Here my deployed app name is myadb2c, so update the Redirect URI as below:
https://myadb2c.azurewebsites.net/getAToken

AZURE WEB APP: Problem: fatal: Authentication failed for 'webapp url'

Good day. I am new to web development and want to ask how to fix this error in the terminal of the Azure Web App service. git push azure main is the command I keep running in the terminal, but the response is always Password for <webapp url>, and I don't know what password I should enter.
I browsed the internet and am still stuck. The fixes I tried were: removing some credentials in Windows Credential Manager, changing HTTPS to SSH, configuring a global password, and lastly installing GCM from GitHub. Thank you very much.
In the Azure Portal, first we need to create an Azure App Service with the required runtime stack.
You will get this option if you deploy the app using Local Git.
We need to provide credentials while pushing the code from the local Git repository.
You will get the Credentials from Azure Portal => App Service.
Navigate to Azure Portal => Your App Service (which you have created in first step) => Deployment Center => Local Git/ FTPS credentials.
We can use the existing Application scope Username and Password or can create new User scope and use them.
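The push flow can be sketched like this (the app name and app-scope username below are hypothetical; the real username and password come from Deployment Center => Local Git/FTPS credentials):

```shell
# Hypothetical app name and deployment username; substitute your own.
APP_NAME="myapp"
DEPLOY_USER='$myapp'   # app-scope usernames begin with a literal "$"

# The Local Git remote URL for an App Service app has this shape:
REMOTE_URL="https://${DEPLOY_USER}@${APP_NAME}.scm.azurewebsites.net/${APP_NAME}.git"
echo "$REMOTE_URL"

# Then (not executed here), the password prompt expects the portal credentials:
# git remote add azure "$REMOTE_URL"
# git push azure main
```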

Azure App service settings not applied with docker

I'm new to Azure deployment & DevOps. This time I'm doing a small project to create a NestJS API with Azure App Service, using a customized Docker image. Everything was deployed well, with the right database connection credentials as environment variables. However, the problem started when I tried to read these environment variables with
config.service.ts
const PORT = process.env.APPSETTING_PORT;
const MODE = process.env.APPSETTING_MODE;
const POSTGRES_HOST = process.env.APPSETTING_POSTGRES_HOST;
const POSTGRES_PORT : any = process.env.APPSETTING_POSTGRES_PORT;
const POSTGRES_USER = process.env.APPSETTING_POSTGRES_USER;
const POSTGRES_PASSWORD = process.env.APPSETTING_POSTGRES_PASSWORD;
const POSTGRES_DATABASE = process.env.APPSETTING_POSTGRES_DATABASE;
After creating the app in Azure, I assign the values with this command in the Azure CLI:
az webapp config appsettings set --resource-group <my_rs_group> --name <my_app_name> \
  --settings WEBSITES_PORT=80 APPSETTING_POSTGRES_HOST=<my_host_name> \
  APPSETTING_POSTGRES_PORT=5432 APPSETTING_POSTGRES_DATABASE=postgres \
  APPSETTING_POSTGRES_USER=<my_user_name> APPSETTING_POSTGRES_PASSWORD=<my_host_pw> \
  APPSETTING_PORT=80 APPSETTING_MODE=PROD APPSETTING_RUN_MIGRATION=true
I've been bothered by this issue for two days and have gone through many similar threads on the site, but I couldn't resolve it; the Docker container runtime log always shows that these environment variables failed to be applied.
I'm answering my own question, since I found the thread that resolves this issue. Although for some reason this problem is unpopular, I'm linking it here:
404 Error when trying to fetch json file from public folder in deployed create-react-app
It's vexing that Microsoft didn't make it obvious in their documentation that, in a Node.js environment, you need to set const {env} = process; and access the settings through it (for example, PORT = env.APPSETTING_PORT;) for the server to apply the app settings variables properly at Docker runtime.
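A minimal sketch of what config.service.ts looks like with that workaround (the fallback defaults below are illustrative assumptions, not values from the question):

```javascript
// config.service.ts, sketch of the workaround: destructure `env` from
// `process` once, then read every app setting through it.
const { env } = process;

const PORT = env.APPSETTING_PORT || '80';            // assumed default
const MODE = env.APPSETTING_MODE || 'DEV';           // assumed default
const POSTGRES_HOST = env.APPSETTING_POSTGRES_HOST;
const POSTGRES_PORT = Number(env.APPSETTING_POSTGRES_PORT || 5432);

module.exports = { PORT, MODE, POSTGRES_HOST, POSTGRES_PORT };
```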
Hope whoever meets this issue will not waste time.

Test Google Cloud PubSub emulator with a push subscription

I'm trying to set up a GCP Pub/Sub service that will work with a push-type subscription. However, it's impossible to create one in the development stage, while I have no accessible endpoints.
I assumed that the emulator would allow me to specify a local endpoint so that the service would run flawlessly in local.
However, after setting it up, I couldn't find a way in the Node.js pubsub library to create a subscription while specifying its options; there is no example for this.
This is the pretty simple way to create a simple, default, pull, subscription:
await pubsub.topic(topicName).createSubscription(subscriptionName);
Here is an example of how you would set up a push subscription. It is the same as how you would set it up in the actual Pub/Sub environment; specify pushEndpoint as your local endpoint. When running on the emulator, it will not require authentication for your endpoint.
You can do something like the following:
// Imports the Google Cloud client library
const {PubSub} = require('@google-cloud/pubsub');

// Creates a client
const pubsub = new PubSub();

const options = {
    pushConfig: {
        // Set to your local endpoint.
        pushEndpoint: 'your-local-endpoint',
    },
};

await pubsub.topic(topicName).createSubscription(subscriptionName, options);
You should have an environment variable named PUBSUB_EMULATOR_HOST that points to the emulator host.
My local Pub/Sub emulator has the following URL, http://pubsub:8085, so I add the following env variable to the service that connects to it:
export PUBSUB_EMULATOR_HOST=http://pubsub:8085
The following code should work:
const projectId = "your-project-id";

// Creates a client. It will pick up the env variable automatically.
const pubsub = new PubSub({projectId});

await pubsub.topic(topicName).createSubscription(subscriptionName);

How to get all running PODs on Kubernetes cluster

This simple Node.js program works fine locally because it pulls the Kubernetes config from my local /root/.kube/config file:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });
const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
console.log('Pods: ', JSON.stringify(pods));
Now I want to run it as a Docker container on the cluster and get all of the cluster's currently running pods (for the same/current namespace). Now, of course, it fails:
Error: { Error: ENOENT: no such file or directory, open '/root/.kube/config'
So how do I make it work when deployed as a Docker container to the cluster?
This little service needs to scan all running pods... I assume it doesn't need to pull config data, since it's already deployed; it just needs to access the pods on the current cluster.
A couple of concepts to wrap your head around first:
Service account
Role
Role binding
To reach your end goal (which, if I understand correctly, is to containerize a Node.js application):
Step 1: Put the application in a container.
Step 2: Create a deployment/statefulset/daemonset, as per your requirement, using the container created in step 1.
Explanation:
In step 2 above, if you do not explicitly mention a (custom) serviceaccount, then by default it will be the default service account, whose credentials are mounted inside the container here:
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-xxxx
readOnly: true
which can be verified with this command after (successful) pod creation:
kubectl get pod -n {yournamespace(by default is default)} POD_NAME -o yaml
Now (gotchas!!), if you cannot access the cluster with those credentials, then it depends on which service account you are using and what rights that serviceaccount has. For example, if you are using the abc serviceaccount, which has no rolebinding, then you will not be able to view the cluster. In that case you first need to create a role (to read pods) and a rolebinding (for that role) to the serviceaccount.
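For example (with a hypothetical namespace default and serviceaccount abc), the role and rolebinding can be sketched with kubectl:

```shell
# Hypothetical namespace and serviceaccount names; adjust to your setup.
NAMESPACE="default"
SERVICEACCOUNT="abc"

# RBAC subjects for serviceaccounts take the form system:serviceaccount:<ns>:<name>
SUBJECT="system:serviceaccount:${NAMESPACE}:${SERVICEACCOUNT}"
echo "$SUBJECT"

# Then (not executed here):
# kubectl create role pod-reader --verb=get,list,watch --resource=pods -n "$NAMESPACE"
# kubectl create rolebinding pod-reader-binding --role=pod-reader \
#   --serviceaccount="${NAMESPACE}:${SERVICEACCOUNT}" -n "$NAMESPACE"
```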
UPDATE: The problem was resolved by changing Config.fromKubeconfig() to Config.getInCluster(). Ref
Clarification: the fromKubeconfig() function is good if you are running your application on a node which is part of the Kubernetes cluster and has a cluster-access token saved here: /$USER/.kube/config. But if you want to run the Node.js application in a container in a pod, then you need Config.getInCluster() to load the token.
If you are nosy enough, check the comments on this answer! :P
Note: here the nodejs library in discussion is this
