There are a lot of photo sharing applications out there; some make money and some don't. Photo sharing takes a lot of space, so I wonder where they host these images. Rich services probably use Amazon or their own servers, but what about the rest? Do they have access to some kind of free service, or have they purchased terabytes of storage from their web host?
AWS S3 is generally what you are referring to. Its cost comes largely from the durability guarantees it attaches to stored data, and for photo sharing that level of durability is often not required (compared with, say, a financial statement).
AWS also offers cheaper options such as S3 Reduced Redundancy Storage (RRS) and Glacier. Photos that have not been accessed for a long time can be kept in Glacier (retrieval takes time, but storage is cheap), while RRS suits derived images that can be regenerated if lost, such as thumbnails. Good photo-sharing services make many storage decisions like these to manage cost.
You can read more about these storage types here: http://aws.amazon.com/s3/faqs/
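To make that concrete, here is a minimal sketch using the AWS SDK for JavaScript (v2); the bucket name, key prefixes, and the 90-day threshold are made-up examples of the kind of policy such a service might apply:

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

// Store a regenerable thumbnail with Reduced Redundancy Storage (cheaper,
// acceptable for data you can rebuild from the original).
async function uploadThumbnail(buffer) {
  await s3.putObject({
    Bucket: 'my-photo-bucket',          // hypothetical bucket
    Key: 'thumbnails/1234.jpg',         // hypothetical key
    Body: buffer,
    ContentType: 'image/jpeg',
    StorageClass: 'REDUCED_REDUNDANCY',
  }).promise();
}

// Transition originals to Glacier after 90 days without deleting them.
async function addGlacierLifecycleRule() {
  await s3.putBucketLifecycleConfiguration({
    Bucket: 'my-photo-bucket',
    LifecycleConfiguration: {
      Rules: [{
        ID: 'archive-old-originals',
        Status: 'Enabled',
        Filter: { Prefix: 'originals/' },
        Transitions: [{ Days: 90, StorageClass: 'GLACIER' }],
      }],
    },
  }).promise();
}
```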
There is also a case study of SmugMug on AWS. I once heard its founder describe how he initially stored photos on his own hard disks, but moved to AWS after S3 prices came down. Read the details here:
AWS Case Study: SmugMug's Cloud Migration: http://aws.amazon.com/solutions/case-studies/smugmug/
I am new to AWS and need to clarify a few things before I start coding.
1) For video streaming I am planning to use Node.js and S3, but if a video is 1.5 GB, the time to fetch it from S3 into Node is very high. If I use AWS EFS or EBS instead, there is no need for the API call to S3. Is EBS or EFS a reliable way to store huge media files?
2) If I use CloudFront plus S3, the cost is quite high according to the AWS pricing calculator.
I have read many blogs and articles on this, but they weren't useful to me. Can someone with experience in these services suggest the most cost-effective option that is still reasonably efficient?
EFS is great because it centralizes the data so you can scale horizontally properly, but it has its downsides: once you get into higher bandwidth usage, you will almost certainly need to switch to provisioned throughput to avoid burning through your burst credits.
EBS is nice because you are using the bandwidth of the instance itself, but it doesn't allow you to scale horizontally as easily.
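For example, with the media on an EBS- or EFS-backed path, a Node server can stream byte ranges straight off the local filesystem instead of making S3 API calls. A minimal sketch assuming Express; the mount point and route are made up:

```js
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
const MEDIA_DIR = '/mnt/media'; // hypothetical EFS or EBS mount point

app.get('/videos/:name', (req, res) => {
  const filePath = path.join(MEDIA_DIR, path.basename(req.params.name));
  const { size } = fs.statSync(filePath);
  const range = req.headers.range;

  if (!range) {
    // No Range header: send the whole file as a stream.
    res.writeHead(200, { 'Content-Length': size, 'Content-Type': 'video/mp4' });
    return fs.createReadStream(filePath).pipe(res);
  }

  // Parse "bytes=start-end" and stream only that slice.
  const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4',
  });
  fs.createReadStream(filePath, { start, end }).pipe(res);
});

app.listen(3000);
```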
Here's an interesting article I found while working on this:
https://www.missioncloud.com/blog/resource-amazon-ebs-vs-efs-vs-s3-picking-the-best-aws-storage-option-for-your-business
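Separately, on the worry about pulling a 1.5 GB video from S3 into Node: you don't have to buffer the whole object. You can forward the browser's Range header to S3 and pipe the response straight through, so only the requested slice ever flows through the server. A minimal sketch assuming the AWS SDK for JavaScript (v2) and Express; the bucket name and route are made up:

```js
const AWS = require('aws-sdk');
const express = require('express');

const s3 = new AWS.S3();
const app = express();
const BUCKET = 'my-video-bucket'; // hypothetical bucket name

app.get('/stream/:key', (req, res) => {
  const params = { Bucket: BUCKET, Key: req.params.key };
  if (req.headers.range) {
    params.Range = req.headers.range; // e.g. "bytes=0-1048575"
  }

  const request = s3.getObject(params);

  // Mirror S3's status (200 or 206) and the headers the video player needs.
  request.on('httpHeaders', (statusCode, headers) => {
    res.status(statusCode);
    ['content-type', 'content-length', 'content-range', 'accept-ranges']
      .forEach((h) => headers[h] && res.set(h, headers[h]));
  });

  request.createReadStream()
    .on('error', () => res.end())
    .pipe(res);
});

app.listen(3000);
```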
This is probably far-fetched, but can Spark, or any advanced "ETL" technology you know of, connect directly to SQL Server's transaction log file (the .ldf) and extract its data?
The goal is to get SQL Server's real-time operational data without replicating the whole database first (or selecting directly from it).
Appreciate your thoughts!
Rea
To answer your question: I have never heard of any technology that reads an LDF directly, but there are several products on the market that can "link-clone" a database almost instantly using internal tricks. Keep in mind that the data is not copied with these tools; they simply allow instant access for use cases like yours.
There may be free ways to do this, for example using cloud features or the linked-clone functionality that virtual machines offer, but the products I know of today are paid, such as Dell EMC's, Redgate's, and Windocks.
The easiest ones to try outside the cloud are:
Red Gate SQL Clone, with a 14-day free trial:
Red Gate SQL Clone Link
Windocks.com (this is free for some cases, but harder to get started with)
I have a client who is very protective of her data, and she asked me to replace my bot's default storage with a custom storage that saves all the data in an on-premises database.
If I replace the storage, will the Bot Framework still permanently save any conversation data anywhere else (say, somewhere in Azure)? That is something my client would like to avoid for security reasons.
Thanks!
Saving and loading of all session data is handled by the ChatConnector's getData() and saveData() unless you provide your own implementation via settings.storage. In non-emulator, real-life scenarios it goes to https://state.botframework.com/v3/botstate/...
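Here is a minimal sketch of plugging in your own storage, assuming the Node.js botbuilder v3 SDK that ChatConnector implies; the in-memory Map is just a stand-in for your on-premises database calls:

```js
const builder = require('botbuilder');

// Stand-in for the on-premises database (hypothetical; replace the Map with
// real reads/writes against your own DB).
const store = new Map();
const keyOf = (ctx) => `${ctx.userId || ''}:${ctx.conversationId || ''}`;

const onPremStorage = {
  // Called by the framework instead of going to state.botframework.com.
  getData: (context, callback) => {
    callback(null, store.get(keyOf(context)) || {});
  },
  saveData: (context, data, callback) => {
    store.set(keyOf(context), data);
    callback(null);
  },
};

const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});

const bot = new builder.UniversalBot(connector, (session) => {
  session.send('You said: %s', session.message.text);
});

// Route userData/conversationData/privateConversationData to your own store.
bot.set('storage', onPremStorage);
```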
The bot framework doesn't store anything else, I believe. I explored this exact question very recently. Take a look:
http://www.pveller.com/smarter-conversations-part-3-breadcrumbs/
http://www.pveller.com/smarter-conversations-part-4-transcript/
I had to read the source (many times, actually) to trace the inner workings of the Bot Framework, and I didn't see anything that would suggest there is another persistence layer somewhere.
You are probably better off asking on the official support channel to confirm and reassure your client, but I think you're good.
As to how reasonable it is... companies do far crazier things for all kinds of reasons :) By the way, will you also use Microsoft's LUIS for NLU? Does your client have similar concerns about all incoming messages going through that service? It's a deep rabbit hole. I think of engagement bots (as opposed to back-office automation) as very much cloud-native; it's not easy to shield yourself from the cloud and still benefit from all the new tech built for it.
Which costs less money when deployed on the Amazon cloud: Node.js or Java web services?
Or does it even matter? We are considering only one-way traffic (to the server) from many clients.
They're both going to cost roughly the same in terms of hosting costs. In terms of development costs, however, things might be different:
Node is just JavaScript: it has a huge ecosystem and lots of new developers are using it, and since it's quite 'hip', it's easier to find people to hop onto new projects.
Java is old school and has been around forever; there are tons of 'senior' developers you can hire (for good money).
Node is quite a bit faster to develop with. If you're building a small application, you might spend much less time developing it with Node than with Java.
I am developing a website which uses a lot of images.
The images are modified very often (every few seconds, by the clients). All images are on a Linux server, and it is possible that two clients try to change the same image at the same time.
So my question is: should I put the images in a database or just leave them in a folder, and how does the OS handle write-write collisions?
I use Node.js and MongoDB on the server.
You usually store a reference to the file's location in the database. As for write-write collisions: in most cases, whoever has the file open first gets it, but the details depend on the OS you are working with. You will want to look into file locking; this Wikipedia article gives a good overview.
http://en.wikipedia.org/wiki/File_locking
It is also considered good practice for your code to check whether the file is in use and notify the user when write collisions are likely to occur.
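If you do keep the files in a folder, one common way to sidestep torn writes (aside from explicit locking) is to write to a temporary file and then rename it: rename is atomic on the same filesystem, so readers never see a half-written image and the last writer simply wins. A minimal sketch; the directory layout is made up:

```js
const fs = require('fs/promises');
const path = require('path');
const crypto = require('crypto');

// Write the image to a temp file in the same directory, then atomically
// rename it over the final name.
async function saveImageAtomically(dir, name, buffer) {
  const finalPath = path.join(dir, name);
  const tmpPath = path.join(dir, `.${name}.${crypto.randomUUID()}.tmp`);
  await fs.writeFile(tmpPath, buffer);
  await fs.rename(tmpPath, finalPath); // atomic replace on Linux
  return finalPath;
}
```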
I suggest you store your images in MongoDB using the GridFS file system. This lets you keep images and their metadata together, updates are atomic, and there are two more advantages if you are using replica sets for your database:
Your images have the same high availability as the rest of your data and get backed up together.
You can, if needed, direct read queries to secondary members of the set.
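For illustration, here is a minimal GridFS sketch using the official Node.js MongoDB driver's GridFSBucket; the connection string, database, and file names are made-up examples:

```js
const fs = require('fs');
const { MongoClient, GridFSBucket } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bucket = new GridFSBucket(client.db('photos'), { bucketName: 'images' });

  // Upload: stream the file into GridFS along with some metadata.
  await new Promise((resolve, reject) => {
    fs.createReadStream('cat.jpg')
      .pipe(bucket.openUploadStream('cat.jpg', { metadata: { owner: 'alice' } }))
      .on('finish', resolve)
      .on('error', reject);
  });

  // Download: stream it back out (this could just as well pipe into an HTTP response).
  bucket.openDownloadStreamByName('cat.jpg')
    .pipe(fs.createWriteStream('copy.jpg'));
}

main().catch(console.error);
```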
For more information, see
http://docs.mongodb.org/manual/applications/gridfs
http://docs.mongodb.org/manual/replication/
Does this help?
Cheers
Ronald