CUDA app on Amazon EC2 [closed]

I have a GPU application (a C# Windows app) that runs locally on a desktop, and I would like to run it on Amazon EC2 P2 or G2 instances that spin up on the fly, dump some output, and shut down when done.
I have a few questions:
- Does this on-the-fly spin-up of instances work at all, on both Windows and Linux?
- Do I need to log into each spun-up instance and execute the application manually?
- The app needs to read input data and dump an output file; how is this handled on Amazon?
Any good pointers are very much appreciated.

You can install and launch your code at EC2 startup via a custom user-data script; for example, the script can read data or code from S3 under the same account. You'll find this under Configure Instance Details -> Advanced Details in the launch wizard. You can script it using the AWS CLI too.
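To make that concrete, here is a minimal sketch using boto3, the Python AWS SDK (the AMI ID, bucket, entry point, and IAM role name are placeholders, and on a Windows AMI the user data would be a <powershell> block instead of bash):

```python
# Hedged sketch: launch a GPU instance whose user-data script pulls the app and
# input data from S3, runs the job, uploads the output, and shuts itself down.
import boto3

USER_DATA = """#!/bin/bash
aws s3 cp s3://my-bucket/app/ /opt/app --recursive      # placeholder bucket/paths
/opt/app/run.sh                                         # placeholder entry point
aws s3 cp /opt/app/output s3://my-bucket/output --recursive
shutdown -h now                                         # instance goes away when done
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder: a CUDA-capable Linux AMI
    InstanceType="p2.xlarge",
    MinCount=1, MaxCount=1,
    UserData=USER_DATA,
    InstanceInitiatedShutdownBehavior="terminate",  # 'shutdown -h now' terminates it
    IamInstanceProfile={"Name": "s3-access-role"},  # placeholder role granting S3 access
)
```

This also answers the "log in manually?" question: with user data for launch and S3 for input/output, no interactive login is needed.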

Related

Azure App Service - Copy folders between 2 App Services [closed]

I have 2 App Services. How can I copy a folder (including sub-folders) from one App Service to another internally? Both App Services run PHP 7.
No, there is no native way to do that; it's the same problem as copying files from one server to another, and Kudu does not support FTP either.
You have to download the files from one App Service and upload them to the second one. You can use tools like WinSCP to do that. Backup and restore is another option.
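If a scripted copy is acceptable, one possible route is a sketch like the following, assuming the Kudu zip REST API is reachable for both apps; the app names, folder, and deployment credentials are all placeholders:

```python
# Hedged sketch: copy a folder (with sub-folders) between two App Services via
# Kudu's zip API, since there is no native server-to-server copy.
import requests

SRC = "https://source-app.scm.azurewebsites.net"   # placeholder source Kudu site
DST = "https://target-app.scm.azurewebsites.net"   # placeholder target Kudu site
FOLDER = "site/wwwroot/myfolder/"                  # placeholder folder to copy
SRC_AUTH = ("$source-app", "src-deploy-password")  # per-app deployment credentials
DST_AUTH = ("$target-app", "dst-deploy-password")

# Download the folder from the source app as a zip archive...
resp = requests.get(f"{SRC}/api/zip/{FOLDER}", auth=SRC_AUTH)
resp.raise_for_status()
# ...and upload the same archive to the target app; Kudu unpacks it in place.
requests.put(f"{DST}/api/zip/{FOLDER}", data=resp.content, auth=DST_AUTH).raise_for_status()
```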

Big Data integration testing best practice [closed]

I am looking around for resources on best practices for an AWS-based data ingestion pipeline that uses Kafka, Storm, and Spark (streaming and batch), reads from and writes to HBase, and exposes the data layer through various microservices. For my local env I am thinking of creating either Docker or Vagrant images that will let me interact with the env. My issue is how to stand up a functional end-to-end environment that is closer to prod; the brute-force way would be an always-on environment, but that gets expensive. Along the same lines, for a perf environment it seems like I might have to punt and give service accounts the run of the world while limiting other accounts' compute resources so they don't overwhelm the cluster.
I am curious how others have handled the same problem and if I am thinking of this backwards.
AWS also provides a Docker service via EC2 Container Service. If your local deployment using Docker images is successful, you can check out AWS EC2 Container Service (https://aws.amazon.com/ecs/).
Also check out storm-docker (https://github.com/wurstmeister/storm-docker), which provides easy-to-use Dockerfiles for deploying Storm clusters.
Try Hadoop mini clusters; they have support for most of the tools you are using. (See: Mini Cluster.)
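For the local Docker route, here is a rough Python sketch of the idea (the image tag, port, and fixed wait are assumptions, not recommendations): spin up a throwaway Kafka broker for a functional test, run the test against it, and tear it down:

```python
# Hedged sketch: a disposable single-node Kafka broker for end-to-end tests,
# driven through the docker CLI. A real fixture would poll the broker for
# readiness instead of sleeping.
import subprocess
import time

def start_kafka():
    # Start the container detached and capture its ID for later cleanup.
    cid = subprocess.check_output(
        ["docker", "run", "-d", "-p", "9092:9092", "apache/kafka:latest"],
        text=True,
    ).strip()
    time.sleep(10)  # crude wait for the broker to come up
    return cid

def stop_kafka(container_id):
    subprocess.run(["docker", "rm", "-f", container_id], check=True)

if __name__ == "__main__":
    cid = start_kafka()
    try:
        pass  # run ingestion-pipeline tests against localhost:9092 here
    finally:
        stop_kafka(cid)
```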

Azure memory size -- what gives? [closed]

OK,
I am setting up my first Azure VM, and the only images available are basically Windows Server.
Why are their servers so low on memory until you get into pretty big $$$?
Would any of us tell a client with a straight face that they should run a Windows server with 0.75 GB of RAM?
Can I run basic applications on the small machines, or should I not waste my time?
Thank you,
Joe
Not sure where you're looking, but there's definitely more than Windows images in there (Ubuntu, CoreOS, CentOS, SUSE...).
Not to mention that you also have the VM Depot.
Extra-small instances are good for some light loads, like acting as a witness in HA setups, or even running small web sites...
It depends on what you run on it; you'd know better what your app requires.

AWS and Nginx server relationship [closed]

I want to know how this relationship works:
There is an AWS server.
Each AWS server has instances, namely 1, 2, 3, ..., n.
Each instance i can have many ports, like 7001, 7002, etc.
So, now if I use a NodeJS server, which is basically an Nginx server, or otherwise Apache,
does this mean that a server is running on a server?
I'm confused! Can someone clear this up?
Basically:
1. AWS server - this means a physical server, or a virtual machine running inside a physical server, in a datacenter.
2. Ubuntu Server - an example of an operating system designed to run on servers (in the sense of point 1).
3. NodeJS, Nginx, Apache - pieces of software that fulfill the "server" role in a client-server architecture. Generally this software runs on "servers" (in the sense of point 1), but it can also run on any other computer system.
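To make point 3 concrete, here is a sketch using Python's standard library in place of NodeJS (the port is an arbitrary example); this is what "a server running on a server" means in practice:

```python
# A "server" in the software sense: a program that listens on a port and
# answers clients. Run several of these on different ports (7001, 7002, ...)
# and you have many software servers on one machine (the hardware server).
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"served from port 7001\n")

HTTPServer(("0.0.0.0", 7001), Hello).serve_forever()
```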

Amazon EC2 : getting an instance and run a script [closed]

I've looked at Amazon EC2 for some basic scripts I need to run sometimes.
My only issue at the moment is that I've done some tests and read some documentation, but I still don't know if it's possible to write a script that gets the instance (like renting it), then sshs in, uploads my script, and runs it.
From what I've seen it's possible to ssh in, scp my script over, and run it, but I have no idea how to do the first step.
If someone has an answer or some example, it would be greatly appreciated.
Thank you.
Bdloul
You'll probably want to start by launching an instance through the wizard, by selecting Launch Instance in your EC2 console (console.aws.amazon.com > ec2).
Then check out the EC2 command line tools. You'll be able to launch instances, check the status of instances, ssh into them, scp up to them, and start your script. Check back when you hit a snag.
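If you'd rather script the "first step" than click through the wizard, here is a hedged sketch using boto3, the Python AWS SDK (the AMI ID, key pair, and security group names are placeholders; the group must allow inbound ssh):

```python
# Hedged sketch: launch ("rent") an instance, wait until it is running, then
# scp a script up and run it over ssh.
import subprocess
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
inst = ec2.create_instances(
    ImageId="ami-xxxxxxxx",       # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",         # placeholder key pair name
    SecurityGroups=["ssh-only"],  # placeholder group allowing port 22
    MinCount=1, MaxCount=1,
)[0]
inst.wait_until_running()
inst.reload()                     # refresh attributes to pick up the public IP
host = f"ec2-user@{inst.public_ip_address}"  # login user depends on the AMI

# Note: sshd may need a few more seconds after the instance reaches "running".
subprocess.run(["scp", "-i", "my-keypair.pem", "myscript.sh", f"{host}:"], check=True)
subprocess.run(["ssh", "-i", "my-keypair.pem", host, "bash myscript.sh"], check=True)
```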
