I'm trying to write a script that creates a new Google Cloud project and then provisions Firebase resources to it, all using the Node SDK.
The first step is calling google.cloudresourcemanager("v1").projects.create with an organization-level service account that has the proper permissions; on success it returns the expected Operation object.
The issue is that after this call there is often a delay of up to several hours before a call to google.cloudresourcemanager("v1").projects.get or google.firebase({version: "v1beta1"}).projects.addFirebase works, and during that time the project doesn't show up in the console either. The issue is not with permissions (authentication/authorization): when I manually verify that a project exists and then call those two functions, they work as expected.
Has anybody else experienced something like this?
thanks!
The official API documentation mentions the following:
Request that a new Project be created. The result is an Operation which can be used to track the creation process. This process usually takes a few seconds, but can sometimes take much longer. The tracking Operation is automatically deleted after a few hours, so there is no need to call operations.delete.
This means the method works asynchronously: the response can be used to track the creation of the project, but the project hasn't been fully created yet when the API answers.
It also mentions that this can take a long time.
Since this is how the API itself works, all the SDKs, including the Node.js one, share this behavior.
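As a minimal sketch (using the googleapis Node.js client, with authentication already configured; field names follow the public REST API), you can poll the returned Operation and only call addFirebase once it reports done:

const { google } = require("googleapis");

async function createProjectAndAddFirebase(projectId) {
  const crm = google.cloudresourcemanager("v1");

  // Kick off project creation; the response is a long-running Operation.
  const { data: operation } = await crm.projects.create({
    requestBody: { projectId, name: projectId },
  });

  // Poll the Operation until it is done before touching the project.
  let op = operation;
  while (!op.done) {
    await new Promise((resolve) => setTimeout(resolve, 10000)); // wait 10s between checks
    ({ data: op } = await crm.operations.get({ name: op.name }));
  }
  if (op.error) {
    throw new Error(`Project creation failed: ${op.error.message}`);
  }

  // Only now is it reasonably safe to call projects.get or addFirebase
  // (addFirebase itself also returns an Operation you may want to poll).
  const firebase = google.firebase({ version: "v1beta1" });
  await firebase.projects.addFirebase({ project: `projects/${projectId}` });
}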
Related
I have a web server hosted on Cloud Run that loads a TensorFlow model from cloud file store on start. To know which model to load, it looks up the latest reference in a psql DB.
Occasionally a retrain script runs using Google Cloud Functions. This stores a new model in cloud file store and a new reference in the psql DB.
Currently, in order to use this new model I would need to redeploy the Cloud Run instance so it grabs the new model on start. How can I automate using the newest model instead? Of course something elegant, robust, and scalable is ideal, but if something hacky/clunky but functional is much easier, that would be preferred. This is a throw-away prototype, but it needs to be available and usable.
I have considered a few options but I'm not sure how feasible any of them are:
Create some sort of Postgres trigger/notification that the Cloud Run server listens to. I guess this would require another thread, which adds complexity, and I'm unsure how multiple threads work with Cloud Run.
Similar, but using HTTP Pub/Sub: make an endpoint on the server that re-looks up and loads the latest model, and publish to it when the retrainer finishes (see the sketch after this list).
Deploy a new instance and remove the old one after the retrainer runs. Simple in some regards, but it seems riskier and might be hard to accomplish programmatically.
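To make option 2 concrete, the endpoint I have in mind would look roughly like this (getLatestModelPath and loadModel are placeholders for my psql lookup and TensorFlow loading code):

const express = require("express");
const app = express();

let model = null;

async function refreshModel() {
  const modelPath = await getLatestModelPath(); // look up the newest reference in psql
  model = await loadModel(modelPath);           // load the model from file storage
}

// The retrainer (directly, or via a Pub/Sub push subscription) POSTs here when it finishes.
app.post("/reload-model", async (_req, res) => {
  await refreshModel();
  res.status(204).end();
});

// Load once on startup, then reload on demand.
app.listen(process.env.PORT || 8080, refreshModel);

My worry with this is that with several Cloud Run instances, only the instance that happens to receive the request would reload.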
Your current pattern needs cache management (because you are caching a model in memory). How can you invalidate that cache?
Restart the instances? Cloud Run doesn't let you control the instances. The easiest way is to deploy a new revision to force the current instances to stop and new ones to start.
Set a TTL? It's an option: load a model for XX hours, then reload it from the source. Problem: you could have glitches (some instances with the new model and some with the old one, until the cache TTL expires on all the instances).
Offer a cache-invalidation mechanism? As said before, it's hard because Cloud Run doesn't let you communicate with all the instances directly, so a push mechanism is very hard and tricky to implement (not impossible, but I don't recommend wasting time on it). A pull mechanism is an option: check a "latest updated date" somewhere (a record in Firestore, a file in Cloud Storage, an entry in Cloud SQL, ...) and compare it with your model's update date. If they match, great; if not, reload the latest model.
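A rough sketch of that pull mechanism, assuming the retrainer writes a marker document (the config/latestModel document and its updatedAt/path fields are illustrative; a Cloud Storage object or a Cloud SQL row works the same way):

const { Firestore } = require("@google-cloud/firestore");

const db = new Firestore();
let loadedAt = 0;
let model = null;

async function getModel() {
  // Compare the "latest updated date" marker with the model currently in memory.
  const snapshot = await db.doc("config/latestModel").get();
  const { updatedAt, path } = snapshot.data(); // updatedAt assumed stored as epoch millis by the retrainer
  if (updatedAt > loadedAt) {
    model = await loadModel(path); // placeholder for your TensorFlow loading code
    loadedAt = updatedAt;
  }
  return model;
}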
You have several solutions; it all depends on what you want.
But there is another solution, and it's my preference: every time you have a new model, build a new container with the model already baked into it (with Cloud Build) and deploy that new container to Cloud Run.
That solution solves your cache-management issue, and you will have better cold-start latency for all your new instances (in addition to easier rollback, A/B testing or canary release capability, version management and control, portability, testing locally or in other environments, ...).
We are running a larger backend application in Node.js/TypeScript on Firebase, with about 180 Cloud Functions and Firestore as the database. Firebase has been good for our needs so far, but we are getting to a level of usage where even small amounts of downtime can cause a lot of damage. Due to the number of cloud functions, a full deploy can take up to 30 minutes, so we usually only do partial deploys of changed functions, which still take about 10 minutes.

I am trying to find a way to do a quick rollback to the previous version of a given function in case a bug is discovered after a production deploy. Firebase does not seem to provide rollback functionality, so the only option is to re-deploy the code of the previous version. One issue is the deploy time (up to 10 minutes for a single function); the other is Git versioning when there are partial deploys. Normally there would be a branch reflecting exactly what is in prod, but with partial deploys this is no longer the case. The only way to maintain a one-to-one branch with prod is to do a full deploy every time, which takes a prohibitive amount of time (30+ minutes, not including retries). Firebase deploys also usually fail or exceed the deployment quota, which makes things like CI pipelines very difficult (they would have to automatically retry failed functions, and the time is still an issue, since 30+ minutes to deploy is not acceptable in the case of downtime).

Has anyone found a good solution for rollback (versioning) and a Git structure that works well with Firebase at scale?
Cloud Functions for Firebase is based on Cloud Functions, and their behavior is the same. Today, it's not possible to route traffic to a previous version (and therefore to perform a rollback). (I can also tell you that Node.js 16 is now GA, not Beta as still mentioned in the Cloud Functions for Firebase documentation.)
The next Cloud Functions runtime is cooking (and available in preview). That runtime is based on Cloud Run under the hood, which allows traffic splitting/routing and therefore supports rollbacks.
So, for now, you have no way to perform a simple rollback with Firebase Functions. A bigger change could be to use the Cloud Functions v2 runtime directly, or even Cloud Run, but it's a big change in your code base.
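To give an idea, a 2nd-gen HTTP function sketched with the preview API (subject to change) looks like this; each such function is backed by its own Cloud Run service, which is what makes traffic splitting and rollback possible later:

const { onRequest } = require("firebase-functions/v2/https");

// Each 2nd-gen function is deployed as its own Cloud Run service under the hood.
exports.api = onRequest({ region: "us-central1" }, (req, res) => {
  res.send("Hello from a 2nd gen function");
});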
Another solution could be to use a load balancer in front of all your functions and to:
Deploy each new function version under a new name (don't update the current deployment; create a new service each time you deploy a new version)
Create a new serverless backend with the new functions
Update the URL map to take into account the new backend.
After a while, delete the old function versions.
That also requires a lot of work to put into action. And the propagation delay when you update your URL map is typically between 3 and 5 minutes, which isn't such a great advantage compared to your current situation.
It looks like you're not the only one; similar questions have been answered before. I recommend setting up some version control. I would solve the failing-deploy issues first, which should reduce the deploy and redeploy times, especially when multiple functions are involved. You could use a different deploy branch or set up a staging environment as well. I would invest the time in getting the Git setup turnkey.
Per user Ariel:
Each time you make a deploy to a cloud function you get an output line like this:
sourceArchiveUrl: gs://my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
I entered my Google Cloud Platform Developer Console -> Cloud Functions -> function_name -> Source tab
and there almost at the bottom it says: Source location
my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
the same as it was shown in the CLI, but without gs://. That link led me to the following: https://storage.cloud.google.com/my-store-bucket/us-central1-function_name-........
I removed from the link everything that came after
https://storage.cloud.google.com/my-store-bucket
and that led me to a huge list of files, each of which represented a snapshot of all my cloud functions at the point in time of each deploy I had made, exactly what I needed!
The only thing left to do was to locate the file with the last date before my mistaken deploy
source: Retrieving an old version of a Google Cloud function source
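If you prefer to do that lookup in code rather than by clicking through the bucket, here is a small sketch with the @google-cloud/storage Node.js client (the bucket name and prefix come from your own sourceArchiveUrl output):

const { Storage } = require("@google-cloud/storage");

async function listFunctionArchives() {
  const storage = new Storage();
  const [files] = await storage
    .bucket("my-store-bucket") // the bucket from the sourceArchiveUrl line
    .getFiles({ prefix: "us-central1-function_name" });

  // Sort by creation time so the archive just before the bad deploy is easy to spot.
  files
    .sort((a, b) => new Date(a.metadata.timeCreated) - new Date(b.metadata.timeCreated))
    .forEach((file) => console.log(file.metadata.timeCreated, file.name));
}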
as of 2019
Rolling back to an older version of a firebase function (google cloud function)
2021:
Roll back Firebase hosting and functions deploy jointly?
You can roll back a Firebase Hosting deployment, but not the functions, unless you use Git version control or similar. Using partial deploys you can deploy multiple functions/groups (see the grouping sketch after the links below). You can also roll back Remote Config templates, which are kept for up to 90 days.
https://firebase.google.com/docs/remote-config/templates
Firebase partial deploy multiple grouped functions
https://firebase.google.com/docs/cli#roll_back_deploys
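As a sketch of the grouping approach mentioned above (the group and function names here are made up), exporting functions under a group name in index.js lets you redeploy just that group with firebase deploy --only functions:orders:

const functions = require("firebase-functions");

// Everything under exports.orders deploys as orders-create, orders-cancel, ...
exports.orders = {
  create: functions.https.onRequest((req, res) => res.send("order created")),
  cancel: functions.https.onRequest((req, res) => res.send("order cancelled")),
};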
Currently, I have a server running. Whenever I receive a request, I want some mechanism to start the scraping process on some other resource (preferably dynamically created), as I don't want to perform scraping on my main instance. Further, I don't want the other instance to keep running and charging me when I am not scraping data.
So, preferably a system that I can request to start scraping the site and close when it finishes.
I have looked at Google Cloud Functions, but they are capped at 9 minutes per invocation, so they won't fit my requirement, as scraping would take much more time than that. I have also looked at the AWS SDK; it allows creating VMs at runtime and closing them, but I can't figure out how to push my API script onto the newly created AWS instance.
Further, the system should be extensible. Like I have many different scripts that scrape different websites. So, a robust solution would be ideal.
I am open to using any technology. Any help would be greatly appreciated. Thanks
I can't figure out how to push my API script onto the newly created AWS instance.
This is achieved by using UserData:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts.
So basically, you would construct your UserData to install your scripts and all their dependencies, and then run them. This is executed when new instances are launched.
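Here is a rough sketch with the AWS SDK for JavaScript v3 (the AMI ID, instance type, repository URL and script name are placeholders): the user data installs and runs the scraper and then shuts the machine down, and InstanceInitiatedShutdownBehavior turns that shutdown into a terminate so the instance stops charging you:

const { EC2Client, RunInstancesCommand } = require("@aws-sdk/client-ec2");

const userData = `#!/bin/bash
git clone https://example.com/my-scrapers.git /opt/scraper
cd /opt/scraper && npm ci && node scrape-site-a.js
shutdown -h now`;

async function launchScraper() {
  const ec2 = new EC2Client({ region: "us-east-1" });
  await ec2.send(new RunInstancesCommand({
    ImageId: "ami-xxxxxxxxxxxxxxxxx",               // placeholder AMI with Node.js pre-installed
    InstanceType: "t3.small",
    MinCount: 1,
    MaxCount: 1,
    InstanceInitiatedShutdownBehavior: "terminate", // shutting down inside the VM terminates it
    UserData: Buffer.from(userData).toString("base64"),
  }));
}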
If you want the system to be scalable, you can launch your instances in an Auto Scaling group and scale it up or down as required.
The other option is running your scripts as Docker containers, for example using AWS Fargate.
By the way, AWS Lambda has a limit of 15 minutes, so not much more than Google Cloud Functions.
Background:
I am moving a legacy app that stored images and documents on the local disk of the web server over to a PaaS Azure web app, using Azure File Storage to store and serve the files.
Question:
I have noticed that sometimes the URL for a file download fails the first time: either image links on a page are broken until I refresh, or a download fails the first time and then succeeds the next. I am guessing this is due to some issue with how Azure File Storage works and that it hasn't started up or something. The only consistent thread I have observed is that this seems to happen once or twice in the morning when I first start working with it. I am guessing maybe my account has to ramp up or something, so it's not ready on the first go-round. I tried to come up with steps to reproduce, but I could not reproduce the symptom. If my hunch is correct, I will have to wait until tomorrow morning to try again. I will post more detailed error information if/when I can. The URL is built like this:
// Build the relative path within the file share, collapsing any accidental double slashes.
var fullRelativePath = $"{_fileShareName}/{_fileRoot}/{relativePath}".Replace("//","/");
// Append the SAS token so the file can be fetched directly from Azure File Storage.
return $"{_fileStorageRootUrl}{fullRelativePath}{_fileStorageSharedAccessKey}";
Thanks!
So it's been a while, but I remember I was able to resolve this, so I'll be writing this from memory. To be able to access an image from File Storage via a URL, you need to use a SAS token. I already had one, which is why I was perplexed about this. I'm not sure if this is the ideal solution, but what I wound up doing was just appending some random characters to the end of the URL, after the SAS token, and that made it work. My guess is this somehow made the URL unique, which may have helped it bypass some caching mechanism that was behaving erratically.
I'll see if I can dig up working example from my archive. If so, I'll append it to this answer.
I've recently become "reacquainted" with Windows, and I'm also new to .NET and C#. I'm trying to figure out a way to run a program on a Windows 2003 machine at all times (i.e. it runs when no one is logged in and starts automatically on server boot). I think I'm overcomplicating the problem and getting myself stuck.
This program, called Job.exe, normally runs in a GUI, but I do have the option of running it from the command line with parameters.
Because of the "always on" part, the first thing that comes to mind is to create a service. Ridiculously, I'm getting stuck on how exactly to run the executable (Job.exe) from within my Service1.cs file (did I mention I'm new to c#?).
A couple of other points I'm stuck on regarding creating a service are how and where to configure desktop interaction, since I want Job.exe to run totally in the background. Also, since OnStart is supposed to return to the OS when finished, I'm a little confused as to where I should put the code that executes the program; do I place it in my OnStart method, or create a separate method that I then call from OnStart?
My last question on creating a service is about the parameters. Job.exe accepts two parameters in total, one static and one dynamic (i.e. one that could be defined via the service properties dialog in the services management console). I'd like to be able to create multiple instances of the service, specifying a different dynamic parameter for each one. Also, the dynamic parameter should be able to accept a string array.
I'm sure there are options outside of creating a service, so I will take any and all suggestions.
Since you mentioned that you may be over-complicating the problem, you may consider using the Task Scheduler to run your application.
Using the Task Scheduler will allow you to make a "regular" desktop application which is arguably a simpler approach than creating your own service. Also, the Task Scheduler has many options that fit the requirements you touched on.
The simplest approach might be to create a service, reference the application's assembly, and call its Main() method to start the application. The application itself can use Environment.UserInteractive to detect whether it is running as a service or as a desktop application.
One thing to watch out for, as you mention, is that the Start method of a service (and the other control methods) is expected to return more or less immediately (within the timespan of the "Starting Service..." dialog), so you'll need to spin up a thread to run the Main method. Something like this:
// Run the application's Main on a background thread so OnStart can return promptly.
// Adjust the arguments to match your application's Main signature (often string[] args).
Thread t = new Thread(() => { MyApplication.Application.Main("firstParam", "secondParam"); });
t.Start();
The parameters could come from a file, with the service configured to take the file name as a parameter (see this article for one of many examples of how to do that), or the service could be configured, as you mentioned, to take the parameters and pass them along to the application's Main method. Both are viable approaches. Either way, only one instance of a given service can run at a time, so you'd need to register the service multiple times under different names to configure different parameters.