In my company, we are developing a bot with DialogFlow and find the platform quite convenient. However, we are running into a difficulty: every time we have to move a version of the agent to the production environment by restoring the development agent, we lose the training of the bot in production.
I would like to know if there is a way to move an agent to the production environment without losing its previous training.
Thanks in advance,
Ricardo.
When I'm training new people on Terraform, I always find it quite cumbersome to have to deal with real infrastructure.
First, because it involves finding a non-sensitive cloud account or creating a new one, and creating an identity for the new user (including setting up security things like 2FA), which can take some time (especially if you are in a traditional corporate environment where finding a credit card to make payments is almost impossible).
Second, because as you are creating real infrastructure, you quickly run into quirks that get in the way of learning: the time it takes to create various kinds of infrastructure, the cost associated with some of it, the need to deprovision everything afterward since these are just tests, and so on.
Are you aware of any sandbox environment where it would be very easy to create infrastructure with Terraform (even simulated infrastructure), so learners can concentrate on Terraform itself and stop wasting time on "side stuff"? Do you share the same struggle?
Thanks in advance
Terraform does support LocalStack, which describes itself as follows:
LocalStack provides an easy-to-use test/mocking framework for developing Cloud applications. It spins up a testing environment on your local machine that provides the same functionality and APIs as the real AWS cloud environment.
So you could set it up and see how well it suits your teaching requirements.
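For example, once LocalStack is running (it serves all services on a single edge port, 4566 by default), any AWS client can be pointed at it, and Terraform's AWS provider can target the same endpoints through its endpoints block. Here's a minimal smoke test in TypeScript, assuming the AWS SDK for JavaScript v3 and LocalStack's defaults (the bucket name is arbitrary):

```typescript
import { S3Client, CreateBucketCommand, ListBucketsCommand } from "@aws-sdk/client-s3";

// Point the client at LocalStack instead of real AWS.
// The credentials are dummies: LocalStack accepts any values.
const s3 = new S3Client({
  region: "us-east-1",
  endpoint: "http://localhost:4566", // LocalStack's default edge port
  forcePathStyle: true,              // needed for LocalStack's S3 emulation
  credentials: { accessKeyId: "test", secretAccessKey: "test" },
});

async function main() {
  await s3.send(new CreateBucketCommand({ Bucket: "training-sandbox" }));
  const { Buckets } = await s3.send(new ListBucketsCommand({}));
  console.log(Buckets?.map((b) => b.Name)); // => [ "training-sandbox" ]
}

main().catch(console.error);
```

Nothing here touches a real cloud account, so trainees can create and destroy "infrastructure" freely.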
If you are in academia and working with AWS, note that AWS offers AWS Educate to students for free, so you could also use that as a sandbox.
I'm facing a dilemma here about which SE site this question belongs on, so please help me out if it should be somewhere else.
I've been looking into Infrastructure as Code solutions.
I didn't like Terraform much. The lack of IntelliSense makes discoverability harder than programmers are used to.
I've been considering ARM templates. I like that the templates are made available as we create resources in the portal, but they seem far less readable and harder to maintain afterwards.
Then I found Pulumi and love their idea compared to Terraform. The way I see it, their approach is also declarative like the options above, but we can use proper programming languages to get the job done.
Having for loops alone is a must.
Cool, I like that! But since we like using C# (or the other supported languages), why don't we just use the cloud SDKs to manage our infrastructure as code?
Pulumi has compared itself with cloud SDKs, positioning its solution as much safer and arguing that if we just used a cloud SDK ourselves, our solution wouldn't be as reliable.
To what extent is this really true, I wonder?
Last year, I wrote some libraries that used Azure Service Bus queues/topics. There were several integration tests that would run in parallel, and I needed to isolate them by creating new queues/topics, using Microsoft.Azure.ServiceBus.Management.ManagementClient to do this.
It really didn't seem like I had to learn anything at all.
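Roughly, each test wrapped itself in something like this (a sketch in TypeScript with the JavaScript SDK's ServiceBusAdministrationClient rather than my actual C#; the connection string variable and naming scheme are made up, but the shape is the same):

```typescript
import { ServiceBusAdministrationClient } from "@azure/service-bus";

// Hypothetical connection string from the environment; each test creates
// its own queue so parallel runs never see each other's messages.
const admin = new ServiceBusAdministrationClient(process.env.SERVICEBUS_CONNECTION!);

async function withIsolatedQueue(test: (queueName: string) => Promise<void>) {
  const queueName = `it-${Date.now()}-${Math.random().toString(36).slice(2)}`;
  await admin.createQueue(queueName);   // provision per-test infrastructure
  try {
    await test(queueName);              // run the integration test against it
  } finally {
    await admin.deleteQueue(queueName); // always tear it back down
  }
}
```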
Getting to the point now, and not dismissing Pulumi's innovation, which I think is great:
Will Pulumi really add that much benefit compared to using the Azure SDKs?
What's been your experience with it?
A Pulumi developer here, so I'm definitely biased. I suspect the SO community may find your question violating some of the guidance, but I hope my answer survives :)
One upside of using Pulumi is that you get access to multiple providers with a consistent developer experience. You may be using exclusively Azure, but you might at some point start combining it with things like building and publishing Docker images, deploying Kubernetes applications, or managing Datadog dashboards. All of that can be done from the same program or solution.
Now, the biggest difference with imperative SDKs is the notion of desired-state configuration. A Pulumi program describes the graph of resources and dependencies between them (what), not the steps to provision them (how). When you have an environment that lives for months and years, there's a big difference between evolving a single definition with baby steps and applying incremental changes (Pulumi) and writing a bunch of update scripts/programs to bring each environment to the new state (SDK).
How do you maintain multiple environments that may be similar but still different? (production vs staging vs test vs dev) How do you make sure that your short-lived infra that you created for nightly tests reflects the reality of production? What happens when an SDK program fails in the middle - can you retry running it again or will it create duplicate resources/fail with another error? How do you get a simple overview of changes over time in git? Concurrency control? Change history?
All the things above are baked into Pulumi and require manual consideration with a cloud SDK.
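To make the desired-state point concrete, here's a minimal sketch of a Pulumi program in TypeScript (assuming the @pulumi/azure-native provider; the names and location are illustrative). You declare what should exist; pulumi up diffs that description against the stack's last known state and applies only the changes:

```typescript
import * as resources from "@pulumi/azure-native/resources";
import * as storage from "@pulumi/azure-native/storage";

// Declare WHAT should exist; the engine works out HOW to get there.
const rg = new resources.ResourceGroup("app-rg", { location: "westeurope" });

const account = new storage.StorageAccount("appstorage", {
  resourceGroupName: rg.name, // dependency expressed as a plain reference
  sku: { name: storage.SkuName.Standard_LRS },
  kind: storage.Kind.StorageV2,
});

// Editing this file and re-running `pulumi up` applies only the diff;
// a failed update can be retried without creating duplicate resources.
export const storageAccountName = account.name;
```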
Currently, our product is a web application with SQL Server as the DBMS, an ASP.NET backend, and a classic HTML/JavaScript/CSS frontend. The product is actively developed, and each month we have to deploy a new version of it to production.
During this deployment, we update all the components listed above (apply some SQL scripts, update binaries and client files), but we deploy only the delta (the set of files that changed since the last release). This has some benefits; for example, we do not reset custom data, configs, or client adjustments.
Now we are going to move into clouds like Azure, AWS, etc., adjust the product architecture to work with Docker/Kubernetes, and provide the product as SaaS.
And now the question itself: which approach to deployment is recommended in the clouds? Can we keep applying only the delta? Or do we have to reorganize the process to always deploy from scratch?
If there are some Internet resources I have missed, please share.
This question is extremely broad, but maybe some clarification can steer you in the right direction anyway:
Source code deployments (like applying deltas) and container deployments are two very different directions, in the sense that the tooling you invest in during the entire SDLC can differ substantially. Some testing pipelines/products focus heavily (or exclusively) on working with one or the other. There will be tools that can handle both, of course.
They also differ in the problems they're attempting to solve, and each comes with its own pros and cons:
Source Code Deployments/Apply Diffs:
Good for small teams and quick deployments, as they're simple to understand and set up.
Starts to introduce risk when you need to upgrade the host OS or application dependencies.
Starts to introduce risk when the hosts in production begin to drift (have more differing files than expected) more dramatically over time.
Slack has a good write-up of their experience here.
Container deployments
Provides isolation between the application (developer space) and the host OS (sysadmin/ops space). This usually means the two sides can work independently of each other.
Gives an "artifact" that won't change between deployments, i.e. the image tagged v1 will always be the same unless you do something really funky; with source deployments you can't really guarantee this.
The practice of isolating stateless components makes autoscaling those components very easy, and you can eventually spend more time on the harder ones (usually the stateful ones).
Introduces a new abstraction with new concerns that your team will have to mature into. Testing pipelines, dev tooling, and monitoring/logging architectures might all need to be adjusted over time, and that comes with cost and risk.
Stateful containers are hardly a solved problem (e.g. shoving an existing database into a container can be a surprising challenge).
In order to work with Kubernetes, you need a containerized application. That doesn't mean you need to containerize your entire product overnight. Splitting out the front end to deploy with CloudFront/S3 and containerizing one stateless app will get your feet wet.
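For a feel of what "stateless and container-friendly" means in practice, here is a minimal sketch in TypeScript/Node (the endpoint and port are illustrative): configuration comes from the environment, nothing is written to local disk, and the process shuts down cleanly so an orchestrator can replace it at will.

```typescript
import http from "node:http";

// All configuration comes from the environment -- nothing baked into the image.
const port = Number(process.env.PORT ?? 8080);

const server = http.createServer((req, res) => {
  if (req.url === "/healthz") {
    // Liveness probe endpoint for the orchestrator.
    res.writeHead(200).end("ok");
    return;
  }
  // Stateless: any real state lives in a backing service, never on local disk.
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ hello: "world" }));
});

server.listen(port);

// Exit cleanly on SIGTERM so Kubernetes can drain and replace the pod.
process.on("SIGTERM", () => server.close(() => process.exit(0)));
```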
Some books that talk about DevOps philosophies (in which this transition plays a part):
The DevOps Handbook
Accelerate
Effective DevOps
The SRE book
I have an issue where my Heroku app goes idle every 2 minutes, which makes the user wait a few minutes for a payment request. Is there a way to solve this issue? Also, what are some good alternatives to Heroku? I'm also seeing low scores on Google PageSpeed Insights, and I'm guessing that's because the Heroku app is idle most of the time. My app is built with Node.js and React.
There are many different deployment solutions available; it really depends what kind of functionality is important to you as a developer. There are other Heroku-esque solutions, such as Netlify, which offers CD as well as Git integration. Any cloud provider will also be able to do a good job of hosting your application at a low cost. I personally use Amazon Web Services, but Google Cloud and Microsoft Azure will do an equally good job.
You can deploy your Node/React app to an AWS EC2 instance, as long as you're reasonably proficient at Linux system configuration, and it will be extremely low cost and very fast.
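As for the idling itself: Heroku's free dynos sleep after a period of inactivity, and the first request afterwards pays the wake-up cost. The proper fix is a paid dyno (or a host that doesn't sleep), but a common stopgap is to ping the app every few minutes from somewhere that is always on. A rough sketch (the URL is a placeholder; global fetch assumes Node 18+):

```typescript
// keep-alive.ts -- run from any always-on machine or scheduler.
// Hitting the app every few minutes keeps a free dyno from sleeping.
const APP_URL = "https://your-app.herokuapp.com/"; // placeholder

async function ping(): Promise<void> {
  try {
    const res = await fetch(APP_URL);
    console.log(`${new Date().toISOString()} -> ${res.status}`);
  } catch (err) {
    console.error("ping failed:", err);
  }
}

setInterval(ping, 5 * 60 * 1000); // every 5 minutes
ping();
```

Note that this only works around the sleep behaviour rather than fixing cold-start cost, and it burns through free dyno hours faster.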
I have been tasked with creating a scheduled job that first calls an API, converts the response to a new format, and then passes that data to another API. It doesn't sound like there is any logic in between.
The company I work for has a lot of SSIS packages doing a variety of things, but also has a healthy Azure platform with a few WebJobs running. Several developers on my team have expressed a dislike for SSIS packages, so I would like to implement this in Azure, but I want to make sure that is the most reasonable thing to do.
What I am asking for is a pro/con list showing where each option is strong or weak. A good answer will help readers decide whether their specific situation is best solved with an SSIS package or an Azure WebJob, assuming the needed environment is already set up for either.
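For context, stripped of scheduling and hosting concerns, the job itself would look roughly like this (a TypeScript sketch; the endpoints and field names are made up, and global fetch assumes Node 18+):

```typescript
// The whole job: fetch from one API, reshape the data, post it to another.
// Both endpoints and both record shapes below are placeholders.
interface SourceRecord { id: number; fullName: string; }
interface TargetRecord { externalId: string; name: string; }

async function runJob(): Promise<void> {
  const res = await fetch("https://source.example.com/api/records");
  if (!res.ok) throw new Error(`source API returned ${res.status}`);
  const records: SourceRecord[] = await res.json();

  // "Convert the response to a new format": a pure mapping, no business logic.
  const payload: TargetRecord[] = records.map((r) => ({
    externalId: String(r.id),
    name: r.fullName,
  }));

  const post = await fetch("https://target.example.com/api/import", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!post.ok) throw new Error(`target API returned ${post.status}`);
}

runJob().catch((err) => { console.error(err); process.exit(1); });
```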