How do I load an image uploaded to S3 in my website? - node.js

I'm sure this question has been discussed a lot, but I searched for hours and couldn't find a solution; most guides are old, or I can't get them to work when I try.
I'm working on a Node.js + React application and I'm using S3 to store images. I want the React app to load an image, so I need to provide it a URL. I have worked with S3 before (with Rails), and I remember creating a temporary URL that was refreshed from time to time.
Tutorials I found say I have to create a URL like this:
https://<bucket>.s3.amazonaws.com/<image>
So it is not a temporary URL, and it doesn't work anyway: I get an "Access Denied" XML response. I even granted permissions on the file (I selected "Make it public"), but it doesn't seem to work.
So how should I load this image in React? How do I retrieve it from the service? I'm using the official SDK.

Upload your image as public in S3, then check that the image really has public read access by pasting the URL returned by S3 into a web browser. If the image renders, use an image tag with the S3 URL in src:
<img src="https://yourS3Url/image.jpg" alt="HTML IMAGE" />

Try this way.
Add this policy to the S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}
Then add this CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>9000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Upload your files, and you can then access them:
<img src="https://yourS3Url/image.jpg" />

What you need is a pre-signed URL; making buckets public isn't recommended.
In Node.js it is something like:
const AWS = require('aws-sdk');
const s3Client = new AWS.S3();

const url = s3Client.getSignedUrl('getObject', {
  Bucket: bucket,
  Key: key,
  Expires: timeInSeconds
});


Microsoft Graph create share link for specific people

I would like to share a document by link in SharePoint from Microsoft Graph code. The default behaviour is that every person who has the link can see the file; I want the link to work only for specific people.
So my code looks like this:
Permission permission = await _graphClient.Sites[_options.SiteId]
    .Drives[driveId]
    .Items[itemId]
    .CreateLink("view", "organization")
    .Request()
    .PostAsync();
This creates a share link for all people in the organization. Now I would like to grant permissions (https://learn.microsoft.com/en-us/graph/api/permission-grant?view=graph-rest-1.0&tabs=csharp):
await graphClient.Shares["{sharedDriveItem-id}"].Permission
    .Grant(roles, recipients)
    .Request()
    .PostAsync();
But I have no idea what should go in place of "{sharedDriveItem-id}". When I put itemId there it doesn't work, and permission.link.webUrl doesn't work either.
What am I doing wrong?
From this documentation: once you create the shared link, the response object returns an id; that is what you should use in place of {sharedDriveItem-id}. See a similar response object below.
HTTP/1.1 201 Created
Content-Type: application/json

{
  "id": "123ABC", // this is the sharedDriveItem-id
  "roles": ["write"],
  "link": {
    "type": "view",
    "scope": "anonymous",
    "webUrl": "https://1drv.ms/A6913278E564460AA616C71B28AD6EB6",
    "application": {
      "id": "1234",
      "displayName": "Sample Application"
    }
  },
  "hasPassword": true
}
Okay, I found a solution. There are a few steps:
1. As the sharedDriveItem-id I used the encoded webUrl, following these instructions: https://learn.microsoft.com/en-us/graph/api/shares-get?view=graph-rest-1.0&tabs=http
2. When creating the link (https://learn.microsoft.com/en-us/graph/api/driveitem-createlink?view=graph-rest-1.0&tabs=http), in place of scope I put "users". There is no such option in the documentation, but without it, it doesn't work.
3. I added the Prefer header: https://learn.microsoft.com/en-us/graph/api/driveitem-createlink?view=graph-rest-1.0&tabs=http
4. I was using clientSecret/clientId authorization, so I had to grant the Azure app access to Sites.Manage.All and Sites.FullControl.All in Graph API permissions.
Everything works if you are using the Microsoft.Graph NuGet package in the newest version (4.3 right now, if I remember correctly).
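The webUrl-to-shareId encoding from step 1 (documented on the shares-get page linked above) can be sketched in Node: base64-encode the URL, strip the padding, make it URL-safe, and prefix "u!".

```javascript
// Encode a sharing URL into the share ID format Graph expects for
// /shares/{sharedDriveItem-id}, per the shares-get documentation:
// base64 -> drop '=' padding -> '/' to '_', '+' to '-' -> prefix "u!".
function shareIdFromUrl(webUrl) {
  return 'u!' + Buffer.from(webUrl, 'utf8')
    .toString('base64')
    .replace(/=+$/, '')
    .replace(/\//g, '_')
    .replace(/\+/g, '-');
}
```

The resulting string is what goes in place of {sharedDriveItem-id} in the Shares request.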

How to load dynamic image tui-image-editor?

Hello all,
I am using the tui-image-editor npm package. I want to open the editor in a Bootstrap modal with dynamic images.
I am getting this error:
Access to Image at 'https://bucke_test.s3.amazonaws.com/5e4cf329adb6054a45a8203a/REN_3018.jpg' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.
webpack://tui.ImageEditor/./src/js/invoker.js?:214 Uncaught (in promise) The executing command state is locked.
I have already set the CORS configuration on my S3 bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
<i className="fa fa-pencil cursor-pointer" aria-hidden="true" onClick={(e) => this.openImageEditorModel("dynamicimageurl")}></i>
openImageEditorModel = (imageURL) => {
  document.getElementById("openImageEditor").click();
  imageEditor = new ImageEditor('#tui-image-editor', {
    includeUI: {
      loadImage: {
        path: imageURL,
        name: 'Blank'
      },
      uiSize: {
        width: '900px',
        height: '500px'
      },
      theme: blackTheme,
      menu: ['draw', 'text'],
      initMenu: 'draw',
      menuBarPosition: 'right'
    },
    cssMaxWidth: 600,
    cssMaxHeight: 400
  });
  imageEditor.loadImageFromURL(imageURL, 'My sample image')
}
Maybe you can try installing this extension in the Chrome browser. I have tried it and it works in a localhost environment:
Allow CORS: Access-Control-Allow-Origin. https://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf?hl=en
Unfortunately, you have just discovered that standard browsers will not let you access image data in any meaningful way if doing so violates the CORS policy of the servers that served it. TUI Image Editor can't do anything to fix this.
You have two options:
If you control the image server (bucke_test.s3.amazonaws.com), then you should be able to set the Access-Control-Allow-Origin header to the appropriate value. Here are the instructions to follow for S3 specifically. For other servers/services, search for the header name together with the server name.
If you don't control the image server but control your own server or server-side web application, set up an HTTP endpoint that reverse-proxies the image server. This way, the remote images will appear to be loaded from your origin, and the image server's CORS policies won't affect you. I don't know what server you are using for your application, but all standard web servers and web application frameworks can proxy requests. Just be careful and lock down the endpoint so that it doesn't get abused.

Make a S3 resource public read-only for some duration using the AWS SDK for node?

I have integrated S3 into my Node app and upload certain documents to S3. I need to share these documents with a third party.
I send the URLs to the third party via an API and they download the documents immediately. I want to make the S3 objects public for some duration. How do I achieve this?
An S3 presigned URL is what you are looking for; you can use presigned URLs to generate links for your third party with an expiration time. I'm posting the reference links below:
Presigned URL AWS official documentation
A blog post that demonstrates presigned URLs further
With #varnit's help I was able to figure out creating a public URL for a certain duration.
I had another issue: getting the KEY from the resource URL.
I resolved that in Node using this:
AmazonS3URI(resourceUrl)
which returns something like this:
{
  "uri": {
    "protocol": "https:",
    "slashes": true,
    "auth": null,
    "host": "bucket.s3.region.amazonaws.com",
    "port": null,
    "hostname": "bucket.s3.region.amazonaws.com",
    "hash": null,
    "search": null,
    "query": null,
    "pathname": "/private/region%xxxxxx/file",
    "path": "/private/region%xxxxx/file",
    "href": "https://bucket.s3.region.amazonaws.com/private/region%xxxxxx/file"
  },
  "isPathStyle": false,
  "bucket": "aaaa-bbbb-ccc",
  "key": "private/xxxxx:yyyyy/file",
  "region": "region"
}
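If you would rather avoid the extra dependency, the bucket/region/key extraction can be sketched with Node's built-in URL class. This is an assumption-laden sketch, not what the library actually does, and it only handles virtual-hosted-style URLs (bucket.s3.region.amazonaws.com):

```javascript
// Extract bucket, region, and key from a virtual-hosted-style S3 URL,
// e.g. https://my-bucket.s3.eu-west-1.amazonaws.com/private/file.txt
function parseS3Url(resourceUrl) {
  const u = new URL(resourceUrl);
  const m = u.hostname.match(/^(.+)\.s3[.-]([^.]+)\.amazonaws\.com$/);
  if (!m) throw new Error('not a virtual-hosted-style S3 URL');
  return {
    bucket: m[1],
    region: m[2],
    key: decodeURIComponent(u.pathname.slice(1)), // drop the leading "/"
  };
}
```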

Adding proper permissions to AWS S3 bucket to allow SEO

I'm trying to verify my site for SEO purposes with Google using https://www.google.com/webmasters/tools/home?hl=en. I am using AWS S3 to host my content and AWS CloudFront to serve it through the CDN. I'm following this checklist: http://www.wikihow.com/Get-Your-Website-Indexed-by-Google and am on step 4.
The steps Google lists to verify are:
Download this HTML verification file. [googlelongstringofcharacters.html]
Upload the file to https://www.dynamicdentaledu.com/
Confirm successful upload by visiting https://www.dynamicdentaledu.com/googlelongstringofcharacters.html in your browser.
Click Verify below.
To stay verified, don't remove the HTML file, even after verification succeeds.
I've added the HTML file to my site's root. When I click Confirm in step 3, I get:
So I skipped that and clicked the Verify button in step 4. Google says:
Verification failed for https://www.dynamicdentaledu.com/ using the
HTML file method (less than a minute ago). We were unable to connect
to your server.
I think this is due to the permissions and bucket policy I have on the S3 bucket. They are, respectively:
And
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dynamicdentaledu.com/*"
    }
  ]
}
How can I enable Google to access what it needs?
EDIT: following AWS's bucket policies documentation, I changed the policy to:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dynamicdentaledu.com/*"
    }
  ]
}
I am now getting:
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>google*longstringofcharacters*.html</Key>
  <RequestId>42DD1F1F0D5E06F7</RequestId>
  <HostId>
    zbmsLAEMz3ed2zKx3gKCHjrtHxeWmaLl16JJs6012zFcLQdnMH48mFJY1YOETD3WMS/8NwkU3SY=
  </HostId>
</Error>
You have three issues.
1. CloudFront will, by default, return cached errors to the browser for 5 minutes after you fix the problem. When the origin server returns an error, there is usually no reason for CloudFront to continually retry, but in a case like this you may want to set the Error Caching TTL for 403 and 404 errors to 0 seconds in CloudFront. See my answer to Amazon CloudFront Latency for further explanation.
2. You did not need to change your bucket policy. If your site was otherwise working and you uploaded this new object with the "make everything public" option selected (equivalent to setting x-amz-acl: public-read via the API), that should have been sufficient, though the 5-minute timer mentioned above could have complicated your troubleshooting. Note also that your bucket permissions allow Everyone to List the contents of your bucket. This is not actually causing the problem here, but it is potentially too permissive: it allows anyone to download a complete list of all your files, which seems like a bad idea in most cases.
3. You didn't upload the file with the correct name. <Code>NoSuchKey</Code> is never returned for any reason other than, simply enough, that there is no object with this key (path/filename.ext) in the bucket; it cannot be caused by policy, permissions, ACLs, etc. Check in the S3 console: the file is either not named as you intended or not in the right place, at the root of the bucket. The long string of characters is not, as far as I am aware, a secret value (only an obscure/unpredictable one), so if the information here doesn't help you resolve this, a screenshot of the console showing the object and its properties should not pose any security issue for you, and may be necessary for further troubleshooting.

Rewrite URLs in CouchDB/PouchDB-Server

If it is possible, how would I achieve the following URL rewrites using PouchDB Server?
At /index.html, display the HTML output of /index/_design/index/_show/index.html.
At /my_database/index.html, display /my_database/_design/my_database/_show/index.html.
My aim is to use PouchDB (and eventually CouchDB) as a stand-alone web server.
I am struggling to translate the rewrite documentation into working code.
Apache CouchDB uses an HTTP API and (consequently) can be used as a static web server--similar to Nginx or Apache HTTPD, but with the added bonus that you can also use MapReduce views, replication, and the other bits that make up Apache CouchDB.
Given just the core API, you could store an entire static site as attachments on a single JSON document and serve each file from its own URL. If that single document is a _design document, then you get the added value of the rewriter.
Here's an example faux JSON document that would do just that:
{
  "_id": "_design/site",
  "_attachments": {
    "index.html": {
      "content_type": "text/html",
      "data": "..."
    },
    "images/logo.png": {
      "content_type": "image/png",
      "data": "..."
    }
  },
  "rewrites": [
    {
      "from": "/",
      "to": "index.html"
    }
  ]
}
The actual value of "data": "..." would be the base64-encoded version of the file. See the Creating Multiple Attachments example in the CouchDB docs.
You can also use an admin UI for CouchDB such as Futon or Fauxton (both available at http://localhost:5984/_utils), both of which offer file upload features. However, those tools require that the JSON document exist first, and they PUT the attachment into the database directly.
Once that's completed, you can then setup a virtual host entry in CouchDB (or Cloudant) which points to the _rewrite endpoint within that design document. Like so:
[vhosts]
example.com = /example-com/_design/site/_rewrite/
If you're not hosting on port 80, then you'll need to request the site at http://example.com:5984/.
Using a _show function (as in your example) is only necessary if you're wanting to transform the JSON into HTML (or different JSON, XML, CSV, etc). If you only want static hosting, then the option above works fabulously. ^_^
There are also great tools for creating these documents. couchapp.py and couchdb-push are the ones I use most often and both support the CouchApp filesystem mapping "spec".
Hope that helps!
