What I'm trying to do is build a website for a photography contest.
I'm using Node.js, Express, MongoDB and Mongoose. I've already managed to let users register, with the accounts stored in MongoDB.
What I'm missing now is somewhere to store the photographs (file sizes ranging from 5 MB to 50 MB), and I'm looking for a preferably free way of doing this.
I thought of uploading to Google Drive via its API (because of the free 15 GB), but I want something automatic: I don't want the user to enter credentials or anything else. The server should take care of everything and only send back a confirmation that all went well. From what I understand, with the Google API it would always be a matter of requesting authorization and access to Google, which I don't want.
I don't know if I've misunderstood and there is a way to do it through Google in the way I mean, but if there isn't, any online storage would be fine.
Sorry for my poor English. Thank you.
For what you're trying to do, I think Firebase Storage may be the best fit:
https://firebase.google.com/pricing
You get 10 GB of free storage, and it's probably the easiest way to do it. Here's some material to get you started:
https://firebase.google.com/docs/storage
Best of luck!
I am developing a web app kind of like canva so I have design images I need to store. Is the best way to store them on s3 just to manually upload each design, make the bucket public, and input the url to each image in my web app?
I ask this question because
I don't know if just making the bucket public is standard practice.
Since the user will be repeatedly loading the main page with all the designs, the images will be constantly reloading. That's why I say repeatedly in the title. Is there a way to handle the images better so the page doesn't constantly re-request them?
What I've tried: I've looked at the documentation, but honestly I don't like the AWS documentation; in my opinion it doesn't give clear answers to questions like these. I've also looked for other Stack Overflow questions, and I couldn't find much that clearly discusses this either.
Let me know if I'm not being clear on anything. Obviously I am not well versed in image storage or anything in that realm so any advice would be greatly appreciated.
It's fine to make such buckets public; with permissions you can add domain-specific restrictions and still keep it public.
If you serve directly from S3, at some point it will cost you more. Provision CloudFront in front of S3 to serve the images. AWS also provides a ready-made image handler, which you can find among its CloudFormation templates.
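For reference, a minimal public-read bucket policy looks roughly like this (the bucket name is a placeholder; tighten the `Principal`/conditions if you want the domain-specific restriction mentioned above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-design-bucket/*"
    }
  ]
}
```

On the repeated-reloading point: putting CloudFront in front and setting a `Cache-Control` header on the objects lets browsers and the CDN cache the images, so the main page stops re-fetching them on every load.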
I'm a novice MTurk user. I created HITs for crowdsourcing using an external question hosted on a server. I wanted to know if there is a web interface where I can see the progress of my HITs. I tried looking at https://requester.mturk.com/manage and https://requestersandbox.mturk.com/manage, but I cannot see the HITs created programmatically using boto3. Should I look somewhere else? If not, what's the way to get this information?
I share your pain right now. As of June 2020, this situation hasn't changed. HITs that are NOT created through the MTurk web interface STILL do not display on the web interface. It's terrible. We have 3 options for seeing and managing the HITs:
Use scripting and boto3. <-- Best option for now.
Use the AWS CLI.
Use the AWS shell (aws-shell).
I think the best option is to write scripts that do exactly what you need. Chances are you'll need to do things more efficiently than you could with the AWS CLI alone. aws-shell isn't easy enough to use, and it also looks to have been unmaintained for over a year at this point (judging by its official GitHub issue tracker).
For what you're asking specifically, you'll need the list_hits() method and possibly list_assignments_for_hit(). See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mturk.html
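A small sketch of that, following list_hits() pagination by hand (the region is an assumption, and passing the client in is just a convenience of this sketch):

```python
def make_mturk_client():
    import boto3  # imported lazily so the rest of the module runs without it
    return boto3.client('mturk', region_name='us-east-1')

def hit_progress(client):
    """Yield (HITId, HITStatus, completed-assignment count) for every HIT,
    following NextToken pagination manually."""
    kwargs = {'MaxResults': 100}
    while True:
        resp = client.list_hits(**kwargs)
        for hit in resp['HITs']:
            yield (hit['HITId'], hit['HITStatus'],
                   hit['NumberOfAssignmentsCompleted'])
        if 'NextToken' not in resp:
            break
        kwargs['NextToken'] = resp['NextToken']
```

Running `for row in hit_progress(make_mturk_client()): print(row)` gives you the kind of progress overview the web interface refuses to show for API-created HITs.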
Also, I'm very new to this, so if it sounds like I barely or only sort of know what I'm talking about, that's correct. But I also wish there had been a straightforward answer to this question a couple of weeks ago when I was sitting here dumbfounded.
So, I want to make a leveling/XP system for my Discord bot (like MEE6 or Tatsumaki), but the only way I know how to do this is by using MySQL. Is there a way to do this just using discord.js, or is there an easier way to do it?
I'm sorry for this question being so general, but I can't find an answer anywhere. Thanks!
You could, though using a DB will help more in the future.
Using a database will probably be the only real solution, unless you want to write files by hand or are fine with the levels being cleared on every restart. From my experience, a database will just work best if you want to store anything like this. Also, with a database you can use other tables to save more information (command statistics, etc.) without a problem.
I've been there myself, though once you get over not wanting to use a database and setting one up you'll wonder how you lived without it.
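To make the "other tables" idea concrete, a schema for this could look something like the following (table and column names are purely hypothetical):

```sql
-- One row per (guild, user) for the leveling system.
CREATE TABLE xp (
  guild_id TEXT NOT NULL,
  user_id  TEXT NOT NULL,
  xp       INTEGER NOT NULL DEFAULT 0,
  level    INTEGER NOT NULL DEFAULT 0,
  PRIMARY KEY (guild_id, user_id)
);

-- A separate table for command statistics, as suggested above.
CREATE TABLE command_stats (
  command  TEXT PRIMARY KEY,
  uses     INTEGER NOT NULL DEFAULT 0
);
```

Unlike a flat file, this survives restarts and lets you bolt on new features by adding tables rather than reshaping one big blob.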
I'm using a point system on my bot. I'm saving it in a JSON file; it's pretty easy to do with Node.
You can scan all the users every time you launch the bot and initialize any new ones in your file.
The downside is that you can wipe the whole file if an error occurs while parsing it when the bot boots.
I'm considering switching to a DB instead.
I am trying to create complete session management in Node.js for logins, chat sessions, etc.
I googled a lot, and every solution I found involved some framework/module. I don't want to use any module/framework; I would rather build my own solution for this.
So this is the plan:
I will set a session cookie on the client machine (yet to figure out how).
For each cookie, I will maintain a unique ID in the database instead of in files, as is the case with PHP (I am using MongoDB).
When a user opens the application, a cookie will be set, an entry will be made in the database, and the corresponding information will be fetched from the DB.
I am yet to lay out a concrete plan for this. I wanted to know whether doing it this way is a good idea? I read somewhere... 'Real men don't use any framework. They make everything on their own' :P
Please correct me if I am headed in the wrong direction. I'm just starting with these things...
I'm not aware of any node.js frameworks that are closed-source. Just pick one that seems to do what you want to do, download it, and study the source code to see how the developer implemented it. Then come up with your (perceived) improvement on how they did it. You'll probably find that implementing session management involves a whole bunch of nitpicky details that were never obvious to you.
Ignore all the above advice if this is a school assignment where you're not allowed to look at related code. If that's the case, I pity you because you have an incompetent teacher.
Reading through the Flickr API documentation, it keeps stating that I require an API key to use their REST protocols. I am only building a photo viewer that gathers information available from Flickr's public photo feed (for instance, I am not planning on writing an upload script, where an API key would be required). Is there any added functionality I can get from getting a key?
Update: I answered the question below.
To use the Flickr API you need to have an application key. We use this to track API usage.
Currently, commercial use of the API is allowed only with prior permission. Requests for API keys intended for commercial use are reviewed by staff. If your project is personal, artistic, free or otherwise non-commercial please don't request a commercial key. If your project is commercial, please provide sufficient detail to help us decide. Thanks!
http://www.flickr.com/services/api/misc.api_keys.html
We set up an account and got an API key. The answer to the question is: yes, there is added functionality with an API key even for something like a simple photo viewer. The flickr.photos.search method has many more features for receiving a feed of images than the public photo feed, such as only retrieving new photos since the last feed request (via the min_upload_date parameter) or searching for "safe photos" only.
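For example, a flickr.photos.search request using both of those parameters can be built like this (the API key value is a placeholder):

```javascript
// Build a flickr.photos.search REST URL. min_upload_date limits results to
// photos uploaded after the given Unix time; safe_search=1 asks for "safe"
// photos only.
function flickrSearchUrl(apiKey, sinceUnixTime) {
  const params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: apiKey,
    min_upload_date: String(sinceUnixTime),
    safe_search: '1',
    format: 'json',
    nojsoncallback: '1',
  });
  return `https://api.flickr.com/services/rest/?${params}`;
}
```

The viewer can then poll with `fetch(flickrSearchUrl(key, lastPollTime))` and only ever receive photos it hasn't seen, instead of re-reading the whole public feed.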
If you have a key, they can monitor your usage and make sure that everything is copacetic: you are below the request limit, and so on. They can separate their stats on regular vs. API usage. And if they are having response-time issues, they can make responses a bit slower for API users in order to keep the main website responding quickly.
Those are the benefits to them.
The benefits to you? If you just write a scraper and it does something they don't like, like hitting them too often, they'll block you unceremoniously for breaking their ToS.
If you only want to hit the thing a couple of times, you can get away without the key. If you are writing a service that will hit their feed thousands of times, you want to give them the courtesy of following their rules.
Plus like Dave Webb said, the API is nicer. But that's in the eye of the beholder.
The Flickr API is very nice and easy to use and will be much easier than scraping the feed yourself.
Getting a key takes about two minutes: you fill in a form on the website and they then email it to you.
Well, if they say you need a key, then you need a key :-) Exposing an API means data can be pulled off the site far more easily, so it's understandable that they want this under control. It's pretty much the same as with other API-enabled sites.