Serverless architecture: how it works and whether it helps with Linux administration

Can anyone tell me how serverless architecture works? Some people are saying this is the next technology. And does it help with Linux administration?

Serverless is a technology that you can use to create infrastructure as code to work with your cloud provider. An example would be if your company uses Amazon Web Services and you need to create a Lambda function. You can do this via Serverless and include several infrastructure properties, such as which virtual private cloud to deploy into, which IAM roles to use, creating an S3 bucket, having your Lambda listen to SNS topics, and deploying to multiple environments.
Currently our company uses Amazon Web Services in combination with the HashiCorp stack (Terraform, Vault, etc.), as well as Serverless, to create our IaC quickly.
As for this being the next technology: maybe not Serverless specifically, but infrastructure as code is extremely powerful, reusable, fast-failing, and useful.
An example: your workplace has a production environment and a dev environment. You can deploy the same Serverless project to dev and production, and if you interpolate the values properly you have a project that can be deployed to any of your environments.
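As a rough illustration of that idea, here is a minimal serverless.yml sketch. The service name, role ARN, network IDs, bucket and topic names are all made-up placeholders, and the exact property names vary a little between Serverless Framework versions:

```yaml
# serverless.yml -- minimal sketch; every name, ARN and ID below is a placeholder
service: example-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}            # pass --stage prod to target production
  iam:
    role: arn:aws:iam::123456789012:role/example-lambda-role
  vpc:                                  # run the function inside a VPC
    securityGroupIds:
      - sg-00000000
    subnetIds:
      - subnet-00000000

functions:
  processMessage:
    handler: handler.main
    events:
      - sns: example-topic-${opt:stage, 'dev'}   # have the Lambda listen to an SNS topic

resources:
  Resources:
    ExampleBucket:                      # extra infrastructure as raw CloudFormation
      Type: AWS::S3::Bucket
      Properties:
        BucketName: example-bucket-${opt:stage, 'dev'}
```

Deploying the same project to different environments is then just "serverless deploy --stage dev" or "serverless deploy --stage prod", with the stage value interpolated into the resource names.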
Is this technology helpful for a Linux admin? I cannot attest to this, as I have only used Serverless for interactions with cloud providers; I believe that is what it was created for.

Related

Is there a maximum limit on Workspaces in a Terraform Cloud Organization?

I am building a small serverless application on AWS. It is a SaaS product for business purposes, so I am looking at ways to cater for multi-tenancy.
So far my proof-of-concepts have been single tenant and deployed via terraform.
I am thinking of using the Terraform Cloud Workspace API to create a workspace for each tenant on sign-up. The workspaces would be configured to auto-apply from my production GitHub branch.
I'm concerned that this isn't the intended usage of Terraform Cloud and that I may run into issues as the application scales.
Does anyone have any insight into the upper limits of Terraform Cloud? I have read through some of HashiCorp's documentation but I can't find anything specific to this.

Keeping app-specific variables when using continuous integration in IBM Cloud

I have an application written in Node.js that I am deploying to the IBM Cloud infrastructure. Everything works great as long as I have the environment variables for the app embedded in my manifest.yml file. This isn't ideal since it keeps these secure values within my GitHub repository.
I use a .env file for my local testing and placing that in my .gitignore is great to ensure that it doesn't roll out to the Git repo, but having to place the values into my manifest really defeats the purpose.
Is there a way to ensure that my environment variables are kept between CI runs that I store on my IBM Cloud apps without resorting to storing them in the manifest?
If you are using Cloud Foundry, then I would recommend taking a look at how Cloud Foundry integrates with services. It allows you to bind a service to an app, thereby making the credentials available to it. If you already have some services, like another database, you can use the concept of a user-provided service. There is no need to set variables; it is managed by Cloud Foundry.
Those concepts integrate well with the Continuous Delivery service on IBM Cloud.
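As a small sketch of how that binding looks in practice (the app and service instance names here are hypothetical), the services are listed in the app's manifest.yml instead of any credentials:

```yaml
# manifest.yml -- sketch only; the app and service instance names are hypothetical
applications:
  - name: my-node-app
    memory: 256M
    buildpacks:
      - nodejs_buildpack
    services:
      - my-database             # an existing managed service instance
      - my-user-provided-creds  # a user-provided service holding external credentials
```

Cloud Foundry injects the bound credentials into the app at runtime through the VCAP_SERVICES environment variable, so no secret values need to live in the repository. A user-provided service instance wrapping existing credentials can be created beforehand with cf create-user-provided-service.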
Where do you run continuous integration? If you run on IBM Cloud Continuous Delivery, you can set environment variables there and give your job access to them.
You can see the documentation here.

Different environments in Azure shared app service plan

Recently I've started experimenting and getting familiar with some of the Azure offerings. I made a simple app and connected it with Azure Functions and Azure Storage, as well as some other offerings like Service Bus.
So far so good, the app is working great and I got my feet wet with some great Azure services.
But now I'm unsure how best to proceed, because what I have so far is a development version of my app. If I wanted to make a prod version, would I have to provision a different set of all the Azure resources used for the dev version?
So basically, I would have mydevsite.azurewebsites.net and myprodsite.azurewebsites.net. Is this correct? I can restrict mydevsite.azurewebsites.net with some IP address restrictions so that it is not publicly available, but I still feel this is a hacky way of doing it and that there should be a better way.
Is there a common approach to a scenario like this?
This is a bit of a broad question, but I can tell you how I have done it before.
A common setup would be three environments: Dev, Test and Production.
Dev mostly runs on the developer's machine (as much as it can). We use a local IIS installation to run the web app and a local SQL Server as the database. Azure Storage and Cosmos DB can also be emulated locally. Certain services, like Search for example, can't be run locally, so you would have to run those in Azure anyway.
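The original answer doesn't say how to run those local dependencies; as one hypothetical illustration (Docker isn't mentioned above), the SQL Server container image and the Azurite storage emulator can be started together with a compose file along these lines:

```yaml
# docker-compose.yml -- hypothetical local-dev setup; not part of the original answer
services:
  sql:
    image: mcr.microsoft.com/mssql/server:2022-latest    # local SQL Server
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "Your_password123"               # placeholder password
    ports:
      - "1433:1433"
  storage:
    image: mcr.microsoft.com/azure-storage/azurite        # Azure Storage emulator
    ports:
      - "10000:10000"   # blob
      - "10001:10001"   # queue
      - "10002:10002"   # table
```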
Test and Production are basically two identical resource groups with the same resources, just configured slightly differently. So double the App Service Plans, SQL databases etc.
Depending on how you want to do it, though, you can of course share resources across environments. But it is a good idea to somehow make sure they do not accidentally use the other environment's resources. And the definite downside of this is that you are putting production data in the same place as test data, which frankly should not be together.
I know some organizations run a Dev environment fully in Azure. There can be a couple of reasons for this: a very heavy environment that can't really run on dev machines, or wanting to test ARM template deployment at the dev stage too.
Having duplicated services allows you to use ARM templates for automatically deploying and updating the infrastructure, which is pretty nice.
If you are on Standard or higher, you might think of using Deployment Slots in App Service for different environments, but they are really not meant for that purpose. We use them to reduce application downtime when deploying a new version, and as a fallback if the update turns out badly. So the deployment goes to a "staging" deployment slot, which gets swapped with the production slot, and the new version is live. We then stop the staging slot so we are not running the older version in the background unnecessarily.
But otherwise we have a separate App Service Plan with separate Web Apps with their own staging slots.
Deployment slots documentation: https://learn.microsoft.com/en-us/azure/app-service/web-sites-staged-publishing

What does the Serverless Framework offer for Azure Functions?

I wonder what features and capabilities the Serverless Framework gives me when developing Node.js functions in the Azure Functions environment.
When I look at this CLI, there is nothing I cannot do with Kudu and GitHub integration (or, for simpler scenarios, directly with the IDE built into the portal).
So I wonder if I am missing something (and will regret it in the future), or whether the Serverless Framework at this stage is more useful for AWS Lambda.
The Serverless Framework is a CLI that offers the following:
Deployment: it will zip your functions/modules and upload them to Azure.
Integration: you can integrate with Blob Storage and DocumentDB, and create HTTP endpoints, all from the same configuration file.
Multi-vendor support: if you decide to leave Azure, migrating to AWS or IBM will be easier.
Configuration: YAML syntax is readable and overall configuration is simple.
Logs: you can stream the logs into the terminal.
Environments: you can replicate a dev stage into production.
Plugins: you can extend the framework features by yourself or use things that the community is creating.
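For Azure specifically, a configuration might look roughly like the sketch below. It relies on the serverless-azure-functions plugin; the service name, region and handler path are placeholders, and the exact event schema depends on the plugin version:

```yaml
# serverless.yml -- rough sketch for Azure Functions; names and region are placeholders
service: example-azure-service

provider:
  name: azure
  region: West Europe
  runtime: nodejs18

plugins:
  - serverless-azure-functions   # Azure support comes from this plugin

functions:
  hello:
    handler: src/handlers/hello.handler
    events:
      - http: true               # expose the function over HTTP
        methods:
          - GET
        authLevel: anonymous
```

Running serverless deploy then packages and uploads the function app, mirroring the deployment and configuration points in the list above.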
When I look at this CLI, there is nothing I cannot do with Kudu and GitHub integration
That's great. If you have already established a development workflow to organize and deploy your code, maybe you don't need the Serverless Framework. The framework was created to help new users deploy things, but it is based on Azure tools, so there is no magic happening there. It's just some people trying to make things easier than using the Azure CLI directly.
Is the Serverless Framework at this stage more useful for AWS Lambda?
Maybe. It depends on whether you think the Azure CLI is better than the AWS CLI. I have tried to implement my own code to deploy AWS Lambda functions, and I know how hard it is. There are many configuration steps and things to learn that are trivial to configure when using the Serverless Framework.

Azure vs Cloud Foundry

I'm new to Cloud Foundry and would like to do a detailed comparison between Windows Azure and Cloud Foundry. I've searched around a lot but haven't been able to find anything useful. Is there a good post or some material that does a detailed feature-wise comparison of the two?
Regards,
Vikram
You're not exactly comparing like for like here. Azure has IaaS-type capability as well as PaaS: not only can you push applications to it, but you can also deploy VM images, including Linux.
However, as Cloud Foundry is open source, the number of runtimes and frameworks it supports evolves quickly, as VMware openly encourages contributions from the OSS community. Correct me if I am wrong, but in a lot of cases with Azure you have to provide the runtime you wish to use, whereas Cloud Foundry supports them "natively", if that's the correct word.
Right now, Cloud Foundry supports the following runtimes and frameworks:
Runtimes
java - 1.6.0_24
java7 - 1.7.0_04
node - 0.4.12
node06 - 0.6.8
node08 - 0.8.2
ruby18 - 1.8.7p357
ruby19 - 1.9.2p180
Frameworks
grails
java_web
lift
node
play
rack
rails3
sinatra
spring
standalone
They also provide all the major backing services, including MySQL, Postgres, RabbitMQ and Redis.
The actual open source project supports a whole lot more too!
I don't know much about Azure, but I've used Cloud Foundry. It's great for Java deployments. I use the VMC Ruby gem for deployment, and it's an easy 3-4 step way to push your WAR to a Cloud Foundry server. They also provide neat documentation for configuration and setup. Oh, and adding services (like MongoDB, MySQL) is also very simple. Sometimes debugging server-related issues is annoying with it, but overall it's good for me :)
http://docs.cloudfoundry.com/tools/deploying-apps.html
Likewise, I don't know much about Cloud Foundry, but I'm using Windows Azure for a couple of client projects and I have to say that I'm now very impressed with the development environment. I'm using the Websites Preview feature with continuous Git deployment via Bitbucket. Setting this up is a breeze and allows me to push my changes to Bitbucket and have Windows Azure deploy them automatically for me. There's currently no ability to run unit tests as part of the deployment cycle, as there is on some other cloud platforms (e.g. AppHarbor), but the feature set for getting up and running with a .NET application and SQL Azure database is now pretty slick. Here are a couple of links:
Deploying an ASP.NET Web Application to a Windows Azure Web Site and SQL Database
Continuous Deployment with Windows Azure Websites and Bitbucket
