How to set non-random file names on imgur.com

How can I set images to have non random file names on imgur.com?
When I upload an image, a random string gets assigned to it.
How would I link to an image like username.imgur.com/meaningful_file_name.jpg instead of username.imgur.com/6dtgw.jpg?
Flickr does a similar thing. The random string can't be there just to guarantee unique file names, since the username already provides a unique namespace.

1. Upload the image. Get the direct image URL, such as http://i.imgur.com/GM9kb.png
2. Add a slash to the direct URL, followed by whatever string you want, such as http://i.imgur.com/GM9kb.png/Caturday.png
3. ????
4. Profit!
P.S.: I found this out by myself...
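A quick way to convince yourself this works (a sketch, assuming the Python requests library and that imgur still ignores trailing path segments on direct image URLs):

    import requests

    # imgur appears to ignore anything after the direct image URL,
    # so both URLs should serve exactly the same bytes.
    direct = "http://i.imgur.com/GM9kb.png"
    vanity = direct + "/Caturday.png"  # cosmetic suffix, free to choose

    assert requests.get(direct).content == requests.get(vanity).content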

I think your best bet is to sign up for Reddit (or log in if you already have an account) and post your question in the Imgur subreddit there. MrGrim (the Reddit username of Imgur's creator) regularly checks in there. And even if he doesn't answer, I'm sure the other Redditors subscribed to that subreddit will be able to tell you.

Related

How to download an original image or video with the baseUrl of the Google Photos API?

I want to use the Google Photos REST API to download original photos or videos from Google Photos, and I found there is no way to achieve it with the "baseUrl".
I have checked the following pages, but there is not a definitive answer:
https://issuetracker.google.com/issues/112096115
https://issuetracker.google.com/issues/80149160
So is there a way to get the original photos and videos, or will there be one?
The addition of '=d' will not give you the original file! I tested it. The quality and resolution of the image seem to match the original, but some information, like EXIF metadata (geo location), is missing. As a result, the file size is also smaller than the original. This makes it unusable for backup synchronization, where I want the original file.
Frankly, I expect Google to give me automated access to my own original data. It looks like that is currently not the case.
I'm afraid there are currently only two options for getting the original photos:
Manual download in Google Photos
Manual download via Google Takeout
Very disappointing!
So I just read through the issue tracker answers you provided, and I noticed that one reply was to add '=d' to the baseUrl.
For example: GET https://lh3.googleusercontent.com/lr/AGb3...HG2n=d
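For reference, fetching via baseUrl with '=d' looks roughly like this (a sketch using Python's requests; the baseUrl value is a truncated placeholder and must come from a fresh mediaItems response, since baseUrls expire):

    import requests

    # baseUrl obtained from a mediaItems.get/list response; baseUrls are
    # short-lived, so this value is only a truncated placeholder.
    base_url = "https://lh3.googleusercontent.com/lr/AGb3...HG2n"

    # '=d' requests the downloadable version instead of a resized preview.
    # As noted above, this is NOT byte-identical to the original upload:
    # location EXIF is stripped, so the file is smaller than the original.
    resp = requests.get(base_url + "=d", timeout=60)
    resp.raise_for_status()
    with open("photo.jpg", "wb") as f:
        f.write(resp.content)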

Removing IDs from URLs [duplicate]

Hey guys! I'm working on a new Cake app and wondering if there is any way for me to remove the ID-in-URL routing from Cake. Perhaps by passing the ID in POST somehow? Having the ID passed in as a URL param just seems really shoddy and unsafe. Thanks!
"Shoddy"? It's standard practice and a perfectly fine solution to have ids in the URL. Look at the URL of your question:
http://stackoverflow.com/questions/4638262/removing-id-from-cakephp-url
                                   ^^^^^^^
                                   id
Also, there's absolutely nothing unsafe about showing an id in a URL. It's just a number that doesn't mean anything. If a user can do something "bad" only by knowing this id, your app is broken and insecure, not the id-passing mechanism.
Trying to work around this scheme means working against a fundamental principle of HTTP and opens up a whole new can of worms.
Some people prefer using slugs instead of primary key ids. This is the removing-id-from-cakephp-url part of the URL from this page. Take a look at the SluggableBehavior.
However, slugs can change. Hence, having the primary key in your URL is useful if you want to have a permalink. StackOverflow does both so that it can support both permalinking from other sites, as well as for SEO reasons. :)
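To illustrate the dual scheme (the question is about CakePHP, but the idea is the same everywhere), here is a minimal sketch in Django; the Question model and template names are hypothetical:

    from django.http import HttpResponsePermanentRedirect
    from django.shortcuts import get_object_or_404, render
    from django.urls import path

    from myapp.models import Question  # hypothetical model with a slug field

    def question_detail(request, pk, slug=None):
        # The numeric id is canonical; look up by id only.
        question = get_object_or_404(Question, pk=pk)
        # If the slug is stale or missing, redirect to the canonical URL,
        # so old permalinks keep working after a title change.
        if slug != question.slug:
            return HttpResponsePermanentRedirect(
                "/questions/%d/%s/" % (question.pk, question.slug)
            )
        return render(request, "question.html", {"question": question})

    urlpatterns = [
        path("questions/<int:pk>/", question_detail),
        path("questions/<int:pk>/<slug:slug>/", question_detail),
    ]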
Regarding security issues, I guess the other answers have already pointed out that there are other ways to make your application secure.
Why do you care? URLs are optimized for SEO reasons; an ID won't matter as long as it isn't too long. If it is, consider using a shorter one with numbers and letters in it instead; it will be as difficult to guess as a longer one made of digits only.
If you are not using GET and you do not supply the params in the URL, your users won't be able to copy-paste the location.

Postfix - extracting all attachments on the server side

How can I extract all of the attachments arriving at a Postfix email server, in order to save them to another (more efficient) file server? I don't want to log in as every user and extract the attachments for each individual user; I want to extract all attachments (as file objects or file streams) as they arrive at the Postfix server and save them. While extracting, the routine should know which user an attachment belongs to, but it should not have to work user by user. I especially want to avoid the login session/cycle.
As a second option: could I get a push notification about a user's attachment as it arrives? I am sure there are ways to do that - please let me know the best ways. Then extract the attachment(s) for the user whose email with attachment(s) has just arrived. Still, I don't want a log-in/out cycle for the extraction. It has to be done in such a way that no individual password is necessary.
I'm guessing a lot of solutions will come in Python, which is great. A Node.js solution would also be helpful.
Please help, and don't mix the two options in your solution - one option at a time please, either one :)
Have a thorough Google for Postfix, procmail and uudeview - that combo will allow you to save attachments to a file server.
As a starting point have a look here:
https://kuther.net/howtos/howto-receive-mail-and-save-attachment-fetchmail-procmail-and-metamail
...and here...
http://www.mugginsoft.com/content/postfix-automatically-decode-zipped-attachments
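The shape of the solution those links describe is: have Postfix (through an alias or a procmail rule) pipe each incoming message into a script on stdin. A minimal Python sketch, with hypothetical paths and a deliberately naive recipient lookup:

    #!/usr/bin/env python3
    import email
    import os
    import sys
    from email import policy

    SAVE_ROOT = "/srv/attachments"  # hypothetical file-server mount

    # Postfix/procmail pipes the raw message to us on stdin.
    msg = email.message_from_binary_file(sys.stdin.buffer, policy=policy.default)

    # Identify the owning user from the To header. In production you would
    # use the envelope recipient Postfix provides instead; no mailbox
    # login or password is needed at any point.
    user = (msg.get("To") or "unknown").split("@")[0].strip("<> ")
    os.makedirs(os.path.join(SAVE_ROOT, user), exist_ok=True)

    for part in msg.walk():
        filename = part.get_filename()
        if not filename:
            continue  # skip non-attachment parts
        # NOTE: sanitize the filename before trusting it in a real deployment.
        dest = os.path.join(SAVE_ROOT, user, os.path.basename(filename))
        with open(dest, "wb") as f:
            f.write(part.get_payload(decode=True))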
HTH
Regards
Frank

How can I make clean search URLs?

If I have a search with a lot of different options, the URL becomes very long and looks very bad. Is there any way to make the URLs look better? Using POST for the search would keep URLs clean, but then people couldn't share search URLs.
Try doing an advanced search with many options on Google: the URL is long and not especially human-readable. I really don't think that's a problem; I don't think many people read URLs often. If you expect people to share search results, then show a button on the search results page that will generate a tinyURL-style shortened URL for that particular query.
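A sketch of that "shorten on demand" button, with the mapping kept in a plain dict standing in for a database table (the /s/ route is hypothetical):

    import secrets

    saved_searches = {}  # token -> full query string (a DB table in practice)

    def shorten(query_string):
        token = secrets.token_urlsafe(5)  # short, hard-to-guess token
        saved_searches[token] = query_string
        return "http://www.mysearch.com/s/" + token

    def resolve(token):
        # The /s/<token> handler would look the query up and rerun the search.
        return saved_searches[token]

    short_url = shorten("category=books&min_price=5&max_price=20&sort=rating")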
A POST is meant for something that changes server state (e.g. a database update) and really shouldn't be used for a search.
You can encode all of your search criteria into something like a hash and then have a single parameter in your query string that holds that value:
http://www.mysearch.com/?query=2esd32d2csg3fasfdlkjSDDFdskjsEWFsDFFR39fdf
I'm not sure exactly how you'd encode everything, but it wouldn't be too difficult.
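One concrete way to do the encoding (a sketch: urlencode the criteria, then base64 them into one opaque, URL-safe value; note this is an encoding, not encryption):

    import base64
    from urllib.parse import urlencode, parse_qsl

    def encode_query(criteria):
        raw = urlencode(criteria).encode()
        return base64.urlsafe_b64encode(raw).decode().rstrip("=")

    def decode_query(token):
        padded = token + "=" * (-len(token) % 4)  # restore stripped padding
        return dict(parse_qsl(base64.urlsafe_b64decode(padded).decode()))

    url = "http://www.mysearch.com/?query=" + encode_query(
        {"q": "search", "min_price": "5", "sort": "rating"}
    )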
Do the different options actually need to be in the URL? For example, a quick search from my Firefox search window gives a URL like:
http://www.google.com/search?q=search&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
If I'm sending the link to anyone, I habitually cut off everything after q=search. Why not have the URL be the bare minimum that you need to send the link to someone (or bookmark), and make the rest as invisible POST variables?

What web photo gallery software meets all my pernickety requirements?

I have a collection of photographs (about 30,000) which I'd like to put online. I've tried doing this before, over the years, with static image galleries, applications such as Gallery2, and self-rolled scripts. None have worked that well, as my requirements are fiddly, but it still seems like this should be a solved problem.
My photos are currently organised into folders named YYYY-MM-DD short album title, using Digikam.
I need a system that:
Is Free software, is essentially feature-complete, and has an active developer community.
Allows new photos and albums to be added and updated automatically with little more manual intervention than rsyncing the source directory on my computer to the web server, and rescanning.
Allows visitors to leave comments
Allows reCAPTCHA or equivalent spam filtering and bulk moderation of these comments.
Reads tags from the IPTC Keywords field.
If it finds a tag named "friends", requires the user to enter a password to view.
If it finds a tag named "family", requires the user to enter a different password to view.
If it finds a tag named "private", does not display the photo at all, or even better, does not upload it to the live web server.
Reads descriptions from the IPTC Caption field.
Creates sane permalinks, e.g. http://example.com/2009/03/28/shortalbumtitle/IMG_0001.jpg
I acknowledge that I may be asking for something that doesn't exist, but I hope it does.
I acknowledge that answers may be something like "use Django and code the bits that don't already exist yourself", in which case do you have any tips? :)
Thanks.
Use Django and code the bits that don't already exist yourself.
Seriously. I was going to write that and was tempted not to when I saw you'd written it yourself, but it really does make the most sense if you have any familiarity with it!
I'd start with django-photologue 2. Get a basic gallery with tagging and comments working. You'll need a couple of photologue's optional dependencies.
Then I'd write a custom import wrapper that allows you to rsync to a dir and update your library.
Comments are handled internally (through photologue, I think) but if not, there are plenty of comment apps that "just work". There is a recaptcha script that works as just another form field.
PIL can read IPTC
The URL structure is up to you =)
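Regarding "PIL can read IPTC" above, a quick sketch: IPTC fields come back keyed by (record, dataset) tuples, where 2:25 is Keywords and 2:120 is Caption:

    from PIL import Image, IptcImagePlugin

    im = Image.open("IMG_0001.jpg")
    iptc = IptcImagePlugin.getiptcinfo(im) or {}

    raw_keywords = iptc.get((2, 25), [])
    if isinstance(raw_keywords, bytes):  # a single keyword comes back as bytes
        raw_keywords = [raw_keywords]
    keywords = [k.decode("utf-8") for k in raw_keywords]
    caption = iptc.get((2, 120), b"").decode("utf-8")

    # The tag-based access rules from the question would branch here,
    # e.g. never publishing photos tagged "private".
    if "private" in keywords:
        print("skipping private photo")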
I'm finally getting around to doing this. I'm using a local Python script to extract image metadata (tags, captions and timestamp) with pyexiv2, rotate the image according to its EXIF orientation tag (if appropriate) with PIL, and export a hierarchy of files to a temporary directory; rsync then uploads it to my host, and a remote Python script (actually a Django app) imports the metadata into a Django DB.
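For the rotate-by-EXIF-orientation step, the PIL side looks roughly like this (a sketch; tag 274 is Orientation, and values 3, 6 and 8 cover the common camera rotations; file paths are made up):

    from PIL import Image

    ORIENTATION = 274  # EXIF tag 0x0112
    ROTATIONS = {3: 180, 6: 270, 8: 90}  # degrees counter-clockwise

    def autorotate(src, dst):
        im = Image.open(src)
        exif = im._getexif() or {}  # None when the JPEG has no EXIF block
        angle = ROTATIONS.get(exif.get(ORIENTATION))
        if angle:
            im = im.rotate(angle, expand=True)
        im.save(dst, quality=95)

    autorotate("IMG_0001.jpg", "/tmp/export/IMG_0001.jpg")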