I'm working on a project that has many small tasks. Some of these tasks are related and require overlapping APIs. The layout looks like this:
task_1/
    main.py
task_2/
    main.py
apis/
    api_1/
    api_2/
    api_3/
test/
    test_api_1.py
    test_api_2.py
    test_task_1.py
    test_task_2.py
    test_task_3.py
For example, task_1 needs api_1 and api_3, while task_2 needs api_1 and api_2. At first I tried using Google Cloud Functions to execute these tasks, but I ran into the issue that GCF needs local dependencies installed in the same folder as the task. This would mean duplicating the code from api_1 into task_1. Further, local testing would become more complicated because of the way GCF does imports (mylocalpackage.myscript, as opposed to .mylocalpackage.myscript):
You can then use code from the local dependency, mylocalpackage:
from mylocalpackage.myscript import foo
Is there a way to structure my codebase to enable easier deployment of GCF? Due to my requirements, I cannot deploy each API as its own GCF. Will Google Cloud Run remedy my issues?
Thanks!
To use Cloud Functions for this, you will need to arrange your code in such a way that all the code a function depends on is present within that function's directory at the time of deployment. This might be done as a custom build/packaging step to move files around.
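For example, a small build script could stage each task together with the APIs it needs before deployment. Here is a minimal sketch in Python, assuming the layout above (the build/ directory and the DEPS mapping are placeholders of my own):

    # build.py - stage each task with its required APIs for deployment
    import pathlib
    import shutil

    # which tasks need which APIs (taken from the example in the question)
    DEPS = {"task_1": ["api_1", "api_3"], "task_2": ["api_1", "api_2"]}

    for task, apis in DEPS.items():
        stage = pathlib.Path("build") / task
        if stage.exists():
            shutil.rmtree(stage)  # start from a clean staging directory
        stage.mkdir(parents=True)
        shutil.copy(pathlib.Path(task) / "main.py", stage / "main.py")
        for api in apis:
            # copy the shared API next to main.py so GCF can import it locally
            shutil.copytree(pathlib.Path("apis") / api, stage / api)

Each staged folder can then be deployed on its own, e.g. with gcloud functions deploy task_1 --source=build/task_1.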
To use Cloud Run for this, you need to create a minimal HTTP webserver to route requests to each of your "functions". This might be best done by creating a path for each function you want to support. At that point, you've recreated a traditional web service with multiple resources.
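A minimal sketch of such a webserver, assuming Flask and placeholder handlers (in a real service each route would call into the corresponding task's code, which can import the shared apis/ packages normally):

    # app.py - one Cloud Run service, one route per "function"
    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/task_1", methods=["POST"])
    def run_task_1():
        # would call into task_1's logic, which imports api_1 and api_3
        return {"status": "task_1 done"}

    @app.route("/task_2", methods=["POST"])
    def run_task_2():
        # would call into task_2's logic, which imports api_1 and api_2
        return {"status": "task_2 done"}

    if __name__ == "__main__":
        # Cloud Run injects the port to listen on via $PORT
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))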
If these tasks were meant as Background Functions, you can wire up Pub/Sub Push integration.
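For example, a push subscription can deliver Pub/Sub messages to one of the routes above (topic, subscription, and service URL are placeholders):

    gcloud pubsub subscriptions create task-1-sub \
        --topic=task-1-jobs \
        --push-endpoint=https://my-service-abc123-uc.a.run.app/task_1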
Related
We are running a larger backend application in Node.js/TypeScript on Firebase, with about 180 cloud functions and Firestore as the database. Firebase has served our needs well so far, but we are reaching a level of usage where even small amounts of downtime can cause a lot of damage. Because of the number of cloud functions, a full deploy can take up to 30 minutes, so we usually only do partial deploys of the changed functions, which still take about 10 minutes.
I am trying to find a way to do a quick rollback to the previous version of a given function in case a bug is discovered after a production deploy. Firebase does not seem to provide rollback functionality, so the only option is to re-deploy the code of the previous version. One issue is the deploy time (up to 10 minutes for a single function); the other is git versioning when there are partial deploys. Normally there would be a branch reflecting exactly what is in prod, but with partial deploys this is no longer the case. The only way to maintain a one-to-one branch with prod is to do a full deploy every time, but this takes a prohibitive amount of time (30+ minutes, not including retries). The firebase deploy also usually fails or exceeds the deployment quota, which makes things like CI pipelines very difficult (the pipeline would have to automatically retry failed functions, and the time is still an issue, since 30+ minutes to deploy is not acceptable in the case of downtime).
Has anyone found a good solution for rollback (versioning) and a git structure that works well with Firebase at scale?
Cloud Functions for Firebase is based on Cloud Functions, and they behave the same. Today, it's not possible to route traffic to a previous version, i.e. to perform a rollback. (I can also tell you that Node.js 16 is now GA, not beta as still mentioned in the Cloud Functions for Firebase documentation.)
The next Cloud Functions runtime is in the works (and available in preview). That runtime is based on Cloud Run under the hood, which allows traffic splitting/routing and therefore supports rollback.
So, for now, there is no way to perform a simple rollback with Firebase functions. Moving to the Cloud Functions v2 runtime directly, or even to Cloud Run, could be a great improvement, but it's a big change to your code base.
Another solution could be to put a load balancer in front of all your functions (a rough gcloud sketch follows this list) and to:
Deploy the new function under a new name (don't update the current deployment; create a new function each time you deploy a new version).
Create a new serverless backend for the new function.
Update the URL map to take the new backend into account.
After a while, delete the old function versions.
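Roughly, those steps translate into gcloud commands like the following (all names are hypothetical, and the region must match where the function is deployed):

    # serverless NEG pointing at the newly deployed function version
    gcloud compute network-endpoint-groups create my-func-v2-neg \
        --region=us-central1 \
        --network-endpoint-type=serverless \
        --cloud-function-name=my-func-v2

    # new backend service wrapping that NEG
    gcloud compute backend-services create my-func-v2-backend --global
    gcloud compute backend-services add-backend my-func-v2-backend --global \
        --network-endpoint-group=my-func-v2-neg \
        --network-endpoint-group-region=us-central1

    # point the URL map at the new backend (rollback = point it back)
    gcloud compute url-maps set-default-service my-lb-map \
        --default-service=my-func-v2-backend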
That also requires a lot of work to put in place. And the propagation delay when you update your URL map is about 3 to 5 minutes, which is not such a great advantage compared to your current solution.
It looks like you're not the only one; there are previously answered questions on this (linked below). I recommend setting up proper version control. I would solve the failing-deploy issues first, which should reduce the deploy and redeploy times, especially when multiple functions are involved. You could also use a separate deploy branch or set up a staging environment. I would invest the time in getting the Git setup turnkey.
Per user Ariel:
Each time you deploy a cloud function, you get an output line like this:
sourceArchiveUrl: gs://my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
I entered my Google Cloud Platform Developer Console -> Cloud Functions -> function_name -> Source tab
and there almost at the bottom it says: Source location
my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
the same as shown in the CLI, but without gs://. That link led me to the following: https://storage.cloud.google.com/my-store-bucket/us-central1-function_name-........
I removed from the link everything that came after
https://storage.cloud.google.com/my-store-bucket
and that led me to a huge list of files, each of which represented an image of all my cloud functions at the time of each deploy I had made, exactly what I needed!
The only thing left to do was to locate the file with the last date before my mistaken deploy.
source: Retrieving an old version of a Google Cloud function source
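The same archives can also be listed and fetched from the CLI with gsutil, using the bucket and object names from the example above:

    # list the archived source zips that GCF saved for each deploy
    gsutil ls gs://my-store-bucket/
    # download the last version from before the bad deploy
    gsutil cp gs://my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip .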
As of 2019:
Rolling back to an older version of a firebase function (google cloud function)
2021:
Roll back Firebase hosting and functions deploy jointly?
You can roll back a Firebase Hosting deployment, but not the functions, without using Git version control etc. Using partials, you can deploy multiple functions/groups. You can also check out Remote Config templates to roll back; template versions are kept for up to 90 days.
https://firebase.google.com/docs/remote-config/templates
Firebase partial deploy multiple grouped functions
https://firebase.google.com/docs/cli#roll_back_deploys
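For reference, partial deploys with the Firebase CLI look like this (function and group names are placeholders):

    # deploy only specific functions
    firebase deploy --only functions:funcA,functions:funcB
    # deploy a whole exported group, e.g. exports.groupA = {...}
    firebase deploy --only functions:groupA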
I'm fairly new to Function Apps. Anyway, we have about a dozen small programs currently running as Windows scheduled tasks on an Azure VM, and we are looking to migrate these to PaaS. Most of these are small console-type background processes that might make an API call, perform a calculation, and store the result in a db, or read some data from a db and then send out an email. We have a mixture of pwsh, Python, and .NET.
I was wondering how many repos I should have? I assume I would need at least 3 (one for each runtime stack). I also didn't want to create a separate repo per app and end up with 50 git repos eventually. Would it be best to just make root-level folders named after each app within a repo to keep the structure separate?
Lastly, should each of these apps be hosted in its own function app (Azure resource), or can I have several apps hosted in a single FA? I'm not sure whether splitting everything up into separate function apps would make deployment easier or not. I guess ease of long-term support/maintenance is the most important aspect to me.
Short version: What is considered the best practice for logically grouping and creating the relationship/mapping between your apps, number of git repos, and azure function app resources?
I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine, to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go: upload the repo I've been running locally as-is onto a VM, which users would then access via the VM's external IP (until I get a good name to call this app), or merge my local node.js environment with what's available via Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach: I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming that with this method of deployment I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content and upload images, and eventually 3D models as well as JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line and have to install all the node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like is a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively, I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for the file structure available somewhere in GAE? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine which is the better approach, and then, if possible, figure out how to make getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
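Concretely, that workflow might look like this (VM name, zone, and paths are placeholders):

    # from your local machine
    gcloud compute ssh my-vm --zone=us-central1-a

    # on the VM: start the app so it keeps running after you log out
    cd ~/my-app
    nohup node index.js > app.log 2>&1 &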
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on App Engine (see the Dockerfile sketch after this list).
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
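To give a flavor of the first step, here is a minimal Dockerfile sketch for a Node app, assuming index.js is the entry point and the app listens on port 8080:

    # minimal container image for a Node app (image tag and port are assumptions)
    FROM node:18-slim
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    EXPOSE 8080
    CMD ["node", "index.js"]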
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.
I'm looking into using a worker as well as a web process for the first time, as I have to scrape a website. Before I commit to this, I'm just wondering about working in a dev environment: how do jobs in a queue get handled when I'm testing my app before it's pushed to Heroku?
I will probably be using RabbitMQ if that's relevant here.
I guess it depends on what you mean by testing. You can unit test the code that does the scraping in isolation from any queue, and you can provide a mock implementation of the queue operations to handle a goodly portion of your integration tests.
I suppose you might want a real instance of the queue for certain tests, but depending on the nature of your project, you might be satisfied with the sorts of tests described in the first paragraph.
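As a sketch of that mock, assuming your code talks to RabbitMQ through a small wrapper with publish/consume methods (the interface here is a placeholder of my own):

    # fake_queue.py - in-memory test double for the queue wrapper
    import queue

    class FakeQueue:
        """Stands in for the real RabbitMQ-backed queue in unit tests."""

        def __init__(self):
            self._messages = queue.Queue()

        def publish(self, message):
            self._messages.put(message)

        def consume(self):
            # raises queue.Empty when nothing is waiting, so tests fail fast
            return self._messages.get_nowait()

Inject a FakeQueue wherever the production code would receive the real client, and the scraping logic can be exercised without a broker.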
If you simply must test the queue operation, and/or you want to run a complete copy of production locally, then you'll have to stand up an instance of RabbitMQ. You can stand one up locally or use one of the SaaS providers.
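Standing one up locally is a single command if you have Docker (standard image; 5672 is AMQP, 15672 is the management UI):

    docker run -d --name rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management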
If you have multiple developers working on the project, you might want to make it easy for them by creating something like a Vagrant script that sets up a complete environment in a VM, or better still, something like Docker. Doing so also gives you a lot more deployment options (making you less dependent on the Heroku tooling).
Lastly, numerous CI solutions like Travis CI provide instances of popular services for running tests, including RabbitMQ.
What is the recommended workflow to debug Foxx applications?
I am currently working on a pretty big application, and it seems to me I am doing something wrong, because the way I am proceeding does not seem maintainable at all:
1. Make your changes in the Foxx app (e.g. new endpoints).
2. Upload your Foxx app to ArangoDB.
3. Test your changes (e.g. trigger API calls).
4. Check the logs to see if something went wrong.
5. Go to 1.
I experienced great time savings by shifting more of the development workflow to the terminal client arangosh. Especially when debugging more complex endpoints, you can isolate queries and functions and debug each individually in the terminal. When you're done debugging, you merge your code into the Foxx app and mount it. Require modules as you would in Foxx, and just enter variables as arguments for your functions or queries.
You can use arangosh either directly from the terminal or via the embedded terminal in the ArangoDB web frontend.
You may also save some time by switching to dev mode, which lets changes in your code be reflected directly in the mounted app, without fetching, mounting, and unmounting each time.
That additional flexibility costs some performance, so make sure to switch back to production mode once your Foxx app is ready for deployment.
When developing a Foxx app, I would suggest using the development mode. This also helps a lot with debugging, as you get faster feedback. It works as follows:
Start arangod with the dev-app-path option, like this: arangod --javascript.dev-app-path /PATH/TO/FOXX_APPS /PATH/TO/DB, where the path to Foxx apps is the folder that contains a database folder that contains your Foxx apps sorted by database. More information can be found here.
Make your changes; there's no need to deploy the app or anything. The app now automatically reloads on every request. Change, try out, change, try out...
There are currently no debugging capabilities. We are planning to add more support for unit testing of Foxx apps in the near future, so you can have a more TDD-like workflow.