AWS Reverse Migration: Set Up localhost from AWS Console - python-3.x

I have been building a web application using python and AWS console on a borrowed computer for the past month.
I recently obtained a new computer and I am trying to change from developing my app online in AWS console to offline on my localhost.
Online, I already have an existing API: Lambda functions, API Gateway, and DynamoDB tables.
Offline, I have the following tools installed: Linux, PyCharm, Python 3.9, AWS CLI v2, AWS SAM CLI, and Docker.
My confusion lies in how to replicate the directory layout of the app on my local computer.
Also, is there a simple command to import, clone, or set up my entire app/API locally?
Any advice, direction, documentation or tutorials related to this reverse migration issue would be greatly appreciated.
Thank You
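There is no single "clone my whole app" command, but each piece can be pulled down individually with the AWS CLI already installed above. A rough sketch, with hypothetical function and API names:

```shell
# Download a deployed Lambda's code via the presigned URL that
# get-function returns ("my-function" is a placeholder name).
FUNC=my-function
URL=$(aws lambda get-function --function-name "$FUNC" \
      --query 'Code.Location' --output text)
curl -o "$FUNC.zip" "$URL" && unzip -d "$FUNC" "$FUNC.zip"

# A REST API in API Gateway can be exported as an OpenAPI definition
# ("abc123" and "prod" are placeholders for the API id and stage).
aws apigateway get-export --rest-api-id abc123 --stage-name prod \
    --export-type oas30 api-definition.json
```

DynamoDB tables would stay in AWS and be reached from the local code over the network (or emulated with DynamoDB Local via Docker); only the code and API definitions need to come down. Going forward, keeping the app as a SAM template and deploying with `sam deploy` avoids this one-way trip.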

Related

Deploying docker image as IBM Cloud Action with credentials

I have a small NodeJS app I want to deploy to IBM Cloud as an "action". What I've been doing until now is just zipping the project files and creating/updating actions using the IBM Cloud CLI like this:
ibmcloud fn action create project-name C:\Users\myuser\Desktop\node-js-projects\some-project\test-folder.zip --kind nodejs:12
This was working great; however, I'm now testing a new project that has a much larger modules folder, and as such IBM Cloud won't accept it. I've turned my attention to using Docker, as the article below explains.
https://medium.com/weekly-webtips/adding-extra-npm-modules-to-ibm-cloud-functions-with-docker-fabacd5d52f1
Everything makes sense; however, I have no idea what to do with the credentials that the app uses. Since IBM Cloud seems to require you to run "docker push", I'm assuming it's not safe to include a .env file in the Docker image?
I know in IBM Cloud I can pass "parameters" to an action but not sure if that helps here. Can those params be accessed from a piece of code deployed this way?
Would really appreciate some help on this one. Hoping there's a straightforward standard way of doing it that I've just missed. I'm brand new to docker so still learning.
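On the parameters question: in IBM Cloud Functions (OpenWhisk), default parameters bound to an action are merged into the object passed to the action's main function, so they are accessible from Docker-based actions too and avoid baking a .env into the image. A hedged sketch, with hypothetical parameter names:

```shell
# Bind credentials as default parameters on the action instead of
# shipping them inside the image (DB_USER/DB_PASS are placeholders).
ibmcloud fn action update project-name \
    --param DB_USER admin \
    --param DB_PASS 's3cret'
```

Inside the Node.js action, these then arrive as properties of the argument to main, e.g. `params.DB_USER`, regardless of whether the action was deployed from a zip or a Docker image.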

How to deploy vue cli project to aws lightsail

I am a beginner at deploying web apps, though I have some experience developing them. I just finished a project I made using Node, Express, and Vue, and I am trying to deploy it to AWS Lightsail. So far I have created a Node.js instance and a MySQL database on Lightsail. I have a backend folder that connects to the database and exposes some APIs used by the front end. I also have a frontend folder which contains the entire Vue project. How should I go about deploying this to Lightsail? Are there any good references or videos I could use? I tried following the AWS documentation but it wasn't very helpful. Any help would be appreciated.
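One common pattern, sketched here with hypothetical paths, is to build the Vue project into static files and let the Express backend serve them, so only one Node.js process runs on the Lightsail instance:

```shell
# Build the Vue front end into static assets (emits frontend/dist/),
# then hand them to the backend; folder names are placeholders.
cd frontend && npm run build
cp -r dist ../backend/public
cd ../backend && npm install --production
# On the Lightsail Node.js instance, keep the server running with pm2:
pm2 start server.js --name my-app
```

This assumes the Express app serves the copied folder with something like `express.static('public')`; the alternative is two separate processes (or an nginx front) serving the API and the static files independently.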

How to call Oracle DB from Google Cloud Functions in Node.js

I'm trying to create a small Google Cloud Function (GCF) that will query an Oracle DB and send an email. I'm looking into using Node.js. I wasn't able to find anything useful; the closest I found was a post about calling Oracle from GCF with Python. Please let me know if there is a way to call an Oracle DB from GCF.
In summary, it's not possible, because you need to install Oracle Instant Client and you can't in the Cloud Functions environment. The same issue applies to App Engine Standard.
I'm writing an article on Medium about this. I'm waiting for a bug fix on Cloud Run and for Google's validation (because some things in it may be confidential) before publishing it.
There are two workarounds:
Build a container (simply put an Express server in front of your function, that's all!).
If you need to reach an on-premises Oracle DB, you can deploy your container on App Engine Flex (however, it doesn't scale to zero) and set up a Serverless VPC Connector.
If you don't need a Serverless VPC Connector, you will be able to deploy on Cloud Run in a couple of weeks, after the bug fix rolls out.
Use Java. You will have to download the Oracle JAR driver manually and install it in Maven/Gradle, but then it works anywhere, even on App Engine Standard.
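The container workaround above can be sketched as follows, with placeholder project and region values, and assuming a Dockerfile that wraps the function in an Express server already exists:

```shell
# Build the image with Cloud Build, then deploy it to Cloud Run
# (PROJECT, the service name, and the region are placeholders).
gcloud builds submit --tag gcr.io/PROJECT/oracle-fn
gcloud run deploy oracle-fn \
    --image gcr.io/PROJECT/oracle-fn \
    --region us-central1 \
    --vpc-connector my-connector   # only needed to reach an on-prem DB
```

The `--vpc-connector` flag is what lets the container reach a database that isn't publicly routable; omit it if the Oracle DB is reachable over the public internet.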
UPDATE
The bug with Cloud Run and gVisor has been solved. Here is my article.

AWS: How to reproduce NodeJS project?

I need help from someone familiar with AWS and web servers. Currently I'm walking through this tutorial trying to get started with NodeJS and AWS: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.html
I'm trying to figure out how to do the equivalent of a "git clone" of a traditional project, but for an AWS project (e.g., if I wanted to work on my existing AWS project on a different machine).
I read some EB CLI documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-cmd-commands.html) and tried "eb clone env-name". However, this actually created a separate environment on AWS within my application, which isn't what I wanted. It also only added a .gitignore and an .elasticbeanstalk folder to my directory, and none of the source code for my AWS application.
I'm confused about the standard process for working with AWS projects. In particular, how can I start working on my existing AWS project from another machine? Is there any way to pull my source code from the AWS project, or to view my code on AWS?
Side note: In the past I worked with Google Apps Scripts on the cloud, which used Clasp CLI for pushing and pulling code to the cloud. This was very intuitive because it was literally clasp pull to pull code from cloud and clasp push to push code to it.
Elastic Beanstalk isn't a code repo. It's a way to host applications in a simplified way, without having to configure the compute resources. Compare this to something like EC2 where all the networking and web server configuration is manual.
You can still use git to manage your source code, and there's git CLI integration with Elastic Beanstalk too. Once you've got your source code working, you bundle it up into a .zip file and upload it to EB. You can also use AWS CodeBuild to watch git repos, build source code into bundles, and automatically deploy it to Elastic Beanstalk.
If you are looking for a way to host source code on AWS, AWS CodeCommit is the managed git solution.
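For recovering code that was only ever uploaded to Elastic Beanstalk, note that each deployed application version is stored as a zip in S3 and can be pulled back down. A hedged sketch with hypothetical names:

```shell
# List where EB stored the most recent source bundle for the app
# ("my-app" is a placeholder application name).
aws elasticbeanstalk describe-application-versions \
    --application-name my-app \
    --query 'ApplicationVersions[0].SourceBundle'
# The output gives the bundle's S3Bucket and S3Key; copy it down:
aws s3 cp s3://BUCKET/KEY my-app-latest.zip
```

This recovers the last uploaded bundle, but a proper git repo (CodeCommit, GitHub, etc.) remains the right source of truth going forward.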
You should take a look at the Amplify Framework by AWS: https://aws-amplify.github.io/docs/ – here's a walkthrough that will get you where you are heading faster. Sure, it mentions teams, but the result can be applied to single developers too: https://aws-amplify.github.io/docs/cli/multienv?sdk=js
Since you mentioned "view my code on AWS", you should have a look here: https://aws.amazon.com/cloud9/ – this will walk you through setting up an account, repos and working with your code on the cloud.
Good luck!

Google stackdriver debug not working in Kubernetes

We have a server application based on Python 3.6 running on Google Kubernetes Engine. I added Google Stackdriver Debug to aid in debugging some production issues, but I cannot get our app to show up in the Stackdriver Debug console. The 'application to debug' dropdown menu stays empty.
The Kubernetes cluster is provisioned with the cloud-debug scope and the app starts up correctly. Also, the Stackdriver Debugging API is enabled on our project. When running the app locally on my machine, cloud debugging works as expected, but I cannot find a reason why it won't work in our production environment.
In my case the problem was not with the scopes of the platform, but rather with the fact that you cannot simply pip install google-python-cloud-debugger on the official python-alpine docker images. Alpine Linux support is not tested regularly and my problem was related to missing symbols in the C-library.
Alpine Linux uses the MUSL C-library and it needs a google cloud debugger specifically built for that library. After preparing a specific docker image for this, I got it to work with the provided credentials.
As an alternative method, you can debug Python pods with Visual Studio code and good old debugpy
I wrote an open source tool that will inject debugpy into any running Python pod without prior setup.
To use it, you'll need to:
Install the tool in your cluster (see Github page)
Run a command locally from a machine with access to the cluster:
robusta playbooks trigger python_debugger name=myapp namespace=default
Port-forward to the cluster (the tool prints instructions)
Attach VSCode to localhost and the port that you're forwarding
This works by creating a new pod on the same node and then injecting debugpy using debug-toolkit.
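The port-forward step might look like this (the pod name and port are placeholders; the tool prints the exact values to use, and 5678 is debugpy's conventional default port):

```shell
# Forward the debugger pod's debugpy port to localhost so VS Code
# can attach with a "Remote Attach" configuration.
kubectl port-forward -n default pod/debugger-myapp 5678:5678
```

In VS Code, a launch configuration of type `python` with request `attach`, host `localhost`, and the forwarded port then connects to the running pod.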
