I'm trying to set up a GCP Pub/Sub service that works with a push-type subscription. However, it's impossible to create one during development, since I have no publicly accessible endpoints.
I assumed the emulator would allow me to specify a local endpoint so that the service would run flawlessly locally.
However, after setting it up, I couldn't find a way in the Node.js pubsub library to create a subscription while specifying its options; there is no example for this.
This is the straightforward way to create a simple, default pull subscription:
await pubsub.topic(topicName).createSubscription(subscriptionName);
Here is an example of how you would set up a push subscription. It is the same as how you would set one up when running against the actual Pub/Sub service: specify pushEndpoint as your local endpoint. When running on the emulator, it will not require authentication for your endpoint.
You can do something like the following:
// Imports the Google Cloud client library
const {PubSub} = require('@google-cloud/pubsub');

// Creates a client
const pubsub = new PubSub();

const options = {
  pushConfig: {
    // Set to your local endpoint.
    pushEndpoint: `your-local-endpoint`,
  },
};

await pubsub.topic(topicName).createSubscription(subscriptionName, options);
You should have an environment variable named "PUBSUB_EMULATOR_HOST" that points to the emulator host.
My local Pub/Sub emulator has the following URL - http://pubsub:8085 - so I am adding the following environment variable to the service that connects to it:
export PUBSUB_EMULATOR_HOST=http://pubsub:8085
The following code should work:
const projectId = "your-project-id";

// Creates a client. It will recognize the env variable automatically.
const pubsub = new PubSub({
  projectId,
});

pubsub.topic(topicName).createSubscription(subscriptionName);
Related
When I set up the emulator and Terraform correctly, will I be able to run Terraform with the results inside the emulator and not inside my project in Google Cloud?
I could not find an answer on the web and cannot start before I know.
Thanks in advance!
It seems the user wants to play with Terraform and point it at the emulator.
https://cloud.google.com/spanner/docs/emulator
Please correct me if I'm wrong.
Yes you can! We use it to set up the Google Pub/Sub emulator with the topic/subscription setup that we have in the production environment.
The trick is that you need to override the API Endpoints in the provider configuration:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.33.0"
    }
  }
}

provider "google" {
  project = "some-project-id"
  pubsub_custom_endpoint = "http://localhost:8085/v1/"
}
To apply this, I then start the emulator like this:
$ gcloud beta emulators pubsub start --project=some-project-id
Note:
The project ID is specified via the argument and must match the project ID you configure in the Terraform provider
Port 8085 is the default port the emulator starts on
Drawbacks
Since you're overriding only a specific endpoint, you must be careful which resources you create. For example, creating a google_service_account will send that request to the actual Google endpoint.
There aren't emulators for every Google service, but there are a few.
I'm new to Azure deployment & DevOps.
This time I'm doing a small project to create a NestJS API with Azure App Service, using a customized Docker image. Everything was deployed well with the right database connection credentials as environment variables. However, the trouble started when I replaced these credentials with the following environment variables in
config.service.ts
const PORT = process.env.APPSETTING_PORT;
const MODE = process.env.APPSETTING_MODE;
const POSTGRES_HOST = process.env.APPSETTING_POSTGRES_HOST;
const POSTGRES_PORT : any = process.env.APPSETTING_POSTGRES_PORT;
const POSTGRES_USER = process.env.APPSETTING_POSTGRES_USER;
const POSTGRES_PASSWORD = process.env.APPSETTING_POSTGRES_PASSWORD;
const POSTGRES_DATABASE = process.env.APPSETTING_POSTGRES_DATABASE;
After creating the app in Azure, I assign their values with this command in the Azure CLI:
az webapp config appsettings set --resource-group <my_rs_group> --name <my_app_name> \
  --settings WEBSITES_PORT=80 APPSETTING_POSTGRES_HOST=<my_host_name> \
  APPSETTING_POSTGRES_PORT=5432 APPSETTING_POSTGRES_DATABASE=postgres \
  APPSETTING_POSTGRES_USER=<my_user_name> APPSETTING_POSTGRES_PASSWORD=<my_host_pw> \
  APPSETTING_PORT=80 APPSETTING_MODE=PROD APPSETTING_RUN_MIGRATION=true
I've been stuck on this issue for two days and have gone through many similar threads on the site, but I couldn't resolve it; the Docker container runtime log always shows that these environment variables fail to be applied.
I'm answering my own question since I found the thread that resolves this issue; for some reason the problem is not well known, so I'm linking it here:
404 Error when trying to fetch json file from public folder in deployed create-react-app
It's vexing that Microsoft didn't make it obvious in their documentation that, in a Node.js environment, you need to set const {env} = process; and access the settings through it (for example PORT = env.APPSETTING_PORT;) so that the server applies the app settings variables properly at Docker runtime.
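In other words, the config.service.ts snippet above would become something like this (just a sketch of the fix described in that thread, reusing the variable names from my question):
// config.service.ts
// Destructure env from process so the App Service app settings are read at container runtime.
const { env } = process;

const PORT = env.APPSETTING_PORT;
const MODE = env.APPSETTING_MODE;
const POSTGRES_HOST = env.APPSETTING_POSTGRES_HOST;
const POSTGRES_PORT: any = env.APPSETTING_POSTGRES_PORT;
const POSTGRES_USER = env.APPSETTING_POSTGRES_USER;
const POSTGRES_PASSWORD = env.APPSETTING_POSTGRES_PASSWORD;
const POSTGRES_DATABASE = env.APPSETTING_POSTGRES_DATABASE;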
I hope whoever runs into this issue won't waste time on it.
I have followed this guide https://github.com/microsoft/FluidFramework/tree/main/server/routerlicious and have successfully set up a Routerlicious server and the gateway https://github.com/microsoft/FluidFramework/tree/main/server/gateway, but I am having trouble connecting the client to it.
Here is the client code config:
...
const hostUrl = "http://localhost:3000";
const ordererUrl = "http://localhost:3000";
const storageUrl = "http://localhost:3000";
const tenantId = "unused";
const tenantKey = "unused";
const serviceRouter = new RouterliciousService({
  orderer: ordererUrl,
  storage: storageUrl,
  tenantId: tenantId,
  key: tenantKey,
});

const container = await getContainer(
  serviceRouter,
  documentId,
  ContainerFactory,
  createNew
);
...
The error it is giving me is:
Buffer is not defined
I guess it is because of the tenantId and tenantKey. How can I solve this?
It sounds like you already have Routerlicious started; if so, jump to the last header. To recap...
Routerlicious
Cloning the Fluid Framework repository,
Installing and starting Docker
Running npm run start:docker from the Fluid Framework repo root.
Routerlicious is our name for the reference Fluid service implementation.
To get Routerlicious started, npm run start:docker uses the pre-built docker containers on Docker Hub. These are prepackaged for you in a docker-compose file in the repository.
Gateway
Gateway is a reference implementation of a Fluid Framework host. Gateway lets you run Fluid containers on a website. More specifically, it's a Docker container that runs a web service. That web service delivers web pages with a Fluid Loader on them.
While that's a good option for running any Fluid container on one site, it is probably easier to use your strategy. That is... create your own website that connects directly to the service and loads your Fluid container.
Connecting to Routerlicious
To connect to your local Routerlicious instance, you need to know the orderer url, storage url, tenant id, and tenant key. By default, we provide a test local key in the config. The test tenant id is "local" and the test key is "43cfc3fbf04a97c0921fd23ff10f9e4b".
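Plugged into the client config from the question, that would look roughly like this (a sketch only: the orderer and storage URLs assume the default ports of the local docker-compose setup, typically 3003 for the orderer and 3001 for storage, so adjust them to whatever your setup actually exposes):
const serviceRouter = new RouterliciousService({
  // Local Routerlicious endpoints (assumed default ports; check your docker-compose).
  orderer: "http://localhost:3003",
  storage: "http://localhost:3001",
  // Test tenant shipped with the local config.
  tenantId: "local",
  key: "43cfc3fbf04a97c0921fd23ff10f9e4b",
});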
I have created some functions that upload and download files from firebase storage with cloud functions using the Firebase SDK and they work.
I would like for the functions to be executed as admin so that they don't need to abide by the storage rules.
I have replaced the Firebase SDK with the Admin SDK, but I found out that my firebase.storage().ref reference doesn't work anymore, and by reading some docs I have realized I now need to use the Google Cloud Storage system instead.
So my question is: is there a way to have a cloud function get administrator powers on the entire Firebase project without having to switch to Google Cloud Functions, and if not, is there a workaround so that I can somehow authorize my cloud function to have full read/write powers on the entire storage? I am puzzled!
Here is a snippet of my code:
const firebase = require('firebase-admin');
const functions = require('firebase-functions');
require("firebase-admin")
require("firebase")
require("firebase/storage");
var serviceAccount = require("serviceAccount.json");
var config = {
  [...]
  credential: firebase.credential.cert(serviceAccount)
};
firebase.initializeApp(config);
var storage = firebase.storage();
var storageRef = storage.ref(); //This returns .ref() is not a function
The Firebase client libraries are not intended to work in backend server environments. The Firebase Admin SDK is meant for backends, but its API for accessing Cloud Storage is different from the client SDK's. The Admin SDK just wraps the Cloud Storage server SDKs, so for Node environments you are actually just going to use the Cloud Storage Node.js client.
When you call:
const admin = require('firebase-admin')
const storage = admin.storage()
you are getting a Storage object from the node SDK. It doesn't have a ref() method. You will need to get a Bucket object and use that instead:
const bucket = storage.bucket()
From here, you should continue to use the API docs I'm linking to.
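As a minimal sketch of what that looks like in a Cloud Function (this assumes a default bucket is configured for the project, and the object paths are just placeholders):
const admin = require('firebase-admin');
admin.initializeApp();

// The project's default bucket; pass a bucket name to use a different one.
const bucket = admin.storage().bucket();

async function copyImage() {
  // Download an object from the bucket to the local filesystem.
  await bucket.file('uploads/photo.jpg').download({ destination: '/tmp/photo.jpg' });
  // Upload a local file back into the bucket under another name.
  await bucket.upload('/tmp/photo.jpg', { destination: 'thumbnails/photo.jpg' });
}
Because the Admin SDK runs with the project's service account, these calls bypass the storage security rules entirely.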
I would like to know how to grant a Google Cloud Platform App Engine project permission to serve content from Google Cloud Storage without setting the Google Cloud Storage bucket permissions to 'share publicly'.
My App Engine project runs Node.js and uses Passport-SAML authentication to authenticate users before allowing them to view content, hence I do not want to set access on an individual user level via IAM. Images and videos are currently served from within a private folder of my app, which is only accessible once users are authenticated. I wish to move these assets to Google Cloud Storage and allow the app to read the files, whilst not providing global access. How should I go about doing this? I failed to find any documentation on it.
I think this might work for you: https://cloud.google.com/storage/docs/access-control/create-signed-urls-program
I can't seem to find the API doc for Node.js (Google is really messing around with their doc URLs). Here's some sample code:
bucket.upload(filename, options, function(err, file, apiResponse) {
  // Signed URL valid for 60 seconds from now.
  var mil = Date.now() + 60000;
  var config = {
    action: 'read',
    expires: mil
  };

  file.getSignedUrl(config, function(err, url) {
    if (err) {
      return;
    }
    console.log(url);
  });
});
As stated in the official documentation:
By default, when you create a bucket for your project, your app has all the permissions required to read and write to it.
Whenever you create an App Engine application, there is a default bucket that comes with the following perks:
5GB of free storage.
Free quota for Cloud Storage I/O operations
By default it is created automatically with your application, but in any case you can follow the same link I shared previously in order to create the bucket. Should you need more than those 5 GB of free storage, you can make it a paid bucket and you will only be charged for the storage that exceeds the first 5 GB.
Then, you can make use of the Cloud Storage Client Libraries for Node.js and have a look at some nice samples (general samples here or even specific operations over files here) for working with the files inside your bucket.
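As a rough sketch along those lines (the bucket name below just follows the usual <project-id>.appspot.com convention for the default App Engine bucket, and the object path is a placeholder):
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

// Default App Engine bucket; usually named <project-id>.appspot.com.
const bucket = storage.bucket('your-project-id.appspot.com');

async function listAndDownload() {
  // List the objects currently stored in the bucket.
  const [files] = await bucket.getFiles();
  files.forEach(f => console.log(f.name));

  // Download one object to the local filesystem.
  await bucket.file('images/photo.jpg').download({ destination: '/tmp/photo.jpg' });
}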
UPDATE:
Here is a small working example of how to use the Cloud Storage client libraries to retrieve images from your private bucket without making them public, by means of authenticated requests. It runs in a Cloud Function, so you should have no issues reproducing the same behavior in App Engine. It does not do exactly what you need, as it displays the image from the bucket on its own, without any integration inside an HTML file, but you should be able to build something from that (I am not too used to working with Node.js, unfortunately).
I hope this can be of some help too.
'use strict';

const gcs = require('@google-cloud/storage')();

exports.imageServer = function imageServer(req, res) {
  // Stream the image from the private bucket straight into the HTTP response.
  let file = gcs.bucket('<YOUR_BUCKET>').file('<YOUR_IMAGE>');
  let readStream = file.createReadStream();

  res.setHeader("content-type", "image/jpeg");
  readStream.pipe(res);
};
};