I have two image files uploaded to firebase storage:
capsule house.jpg was uploaded through the UI (clicking the Upload file button).
upload_64e8fd... was uploaded from my backend server (Node.js) using this:
const bucket = fbAdmin.storage().bucket('gs://assertivesolutions2.appspot.com');
const result = await bucket.upload(files.image.path);
capsule house.jpg is recognized as a JPEG and a link to it is supplied in the right-hand margin. If I click on it, I see my image in a new tab. You can see for yourself:
https://firebasestorage.googleapis.com/v0/b/assertivesolutions2.appspot.com/o/capsule%20house.jpg?alt=media&token=f5e0ccc4-7916-4245-b813-dbdf1838556f
upload_64e8fd... is not recognized as any kind of image file and no link is provided.
The result returned on the backend is a huge JSON object with the following fields:
"selfLink": "https://www.googleapis.com/storage/v1/b/assertivesolutions2.appspot.com/o/upload_64e8fd09f787acfe2728ae73158e20ab"
"mediaLink": "https://storage.googleapis.com/download/storage/v1/b/assertivesolutions2.appspot.com/o/upload_64e8fd09f787acfe2728ae73158e20ab?generation=1590547279565389&alt=media"
The first one sends me to a page that says this:
{
  "error": {
    "code": 401,
    "message": "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.",
    "errors": [
      {
        "message": "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.",
        "domain": "global",
        "reason": "required",
        "locationType": "header",
        "location": "Authorization"
      }
    ]
  }
}
The second one gives me something similar:
Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.
The rules for my storage bucket are as follows:
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;
    }
  }
}
I'm allowing all reads and writes.
So why does it say I don't have access to see my image when it's uploaded through my backend server?
I'd also like to know why it isn't recognized as a JPEG when uploaded through my backend server but is when uploaded through the UI; however, I'd like to focus on the access issue for this question.
Thanks.
By default, files are uploaded as private unless you change your bucket settings, as mentioned here. The code below is a PHP example of how to change the visibility of your documents.
/**
 * {@inheritdoc}
 */
public function setVisibility($path, $visibility)
{
    $object = $this->getObject($path);
    if ($visibility === AdapterInterface::VISIBILITY_PRIVATE) {
        // Remove the allUsers ACL entry to make the object private
        $object->acl()->delete('allUsers');
    } elseif ($visibility === AdapterInterface::VISIBILITY_PUBLIC) {
        // Grant read access to everyone to make the object public
        $object->acl()->add('allUsers', Acl::ROLE_READER);
    }
    $normalised = $this->normaliseObject($object);
    $normalised['visibility'] = $visibility;
    return $normalised;
}
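Since the question uses the Node.js Admin SDK, here is a minimal sketch of the equivalent there, assuming the bucket name from the question; makePublic() on a File grants allUsers read access:

const fbAdmin = require('firebase-admin');

// Assumes fbAdmin.initializeApp() was already called elsewhere.
const bucket = fbAdmin.storage().bucket('gs://assertivesolutions2.appspot.com');

async function uploadPublic(localPath) {
  const [file] = await bucket.upload(localPath); // uploads as private by default
  await file.makePublic();                       // adds allUsers as READER
  // Public objects are reachable at this well-known URL pattern:
  return `https://storage.googleapis.com/${bucket.name}/${file.name}`;
}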
You can check how to set that via the console by following the tutorial in the official documentation: Making data public
Besides that, as indicated in the comment by @FrankvanPuffelen, you won't have a generated URL for the file to be accessed. You can find more information about it here.
Let me know if the information helped you!
The other answer helped me! I have no idea why the Console had me make those security rules if they won't apply...
Based on the Node.js docs (and this probably applies to other languages too), there is a simple way to make the file public during upload:
const result = await bucket.upload(files.image.path, {public: true});
This same option works for bucket.file().save() and similar APIs.
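For example, a minimal sketch of the save() variant (the object name and contents here are made up for illustration):

const fbAdmin = require('firebase-admin');

const bucket = fbAdmin.storage().bucket('gs://assertivesolutions2.appspot.com');

async function savePublic() {
  // save() accepts the same option as upload(); the object is publicly readable afterwards
  await bucket.file('hello.txt').save('hello world', { public: true });
}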
I'm trying to use the Microsoft Graph API to write calendar events within my company.
First of all let me give you a little bit of context.
I'm building a Node API that uses Microsoft Graph to write calendar events, so I configured my application inside Azure Active Directory with the following application permissions
I granted administrator consent as you can see from the picture.
I was also able to get an access token using msal-node:
const graphToken = async () => {
  const azureConfig = {
    auth: {
      clientId: process.env.CLIENT_ID,
      authority: `https://login.microsoftonline.com/${process.env.TENANT_ID}`,
      clientSecret: process.env.CLIENT_SECRET,
    },
  }
  const tokenRequest = {
    scopes: [process.env.GRAPH_ENDPOINT + '/.default'],
  }
  const cca = new msal.ConfidentialClientApplication(azureConfig)
  const authResponse = await cca.acquireTokenByClientCredential(tokenRequest)
  if (authResponse) {
    return authResponse.accessToken
  }
  return null
}
The only thing that seems a little odd to me is the scope set to [process.env.GRAPH_ENDPOINT + '/.default']. I tried to change it, e.g. to [process.env.GRAPH_ENDPOINT + '/Calendar.ReadWrite'], but it throws an exception.
The next thing I'm able to do is retrieve all the calendars a user has the right to write to, using the following Graph endpoint:
https://graph.microsoft.com/v1.0/users/user@example.com/calendars
Now the issue: when I try to do a POST request to write a calendar event, for example
POST https://graph.microsoft.com/v1.0/users/{userId}/calendars/{calendarId}/events
{
  "subject": "Test",
  "body": {
    "contentType": "HTML",
    "content": "Test"
  },
  "start": {
    "dateTime": "2022-11-09T16:00:00",
    "timeZone": "Europe/Rome"
  },
  "end": {
    "dateTime": "2022-11-09T17:00:00",
    "timeZone": "Europe/Rome"
  }
}
Note that calendarId is one of the IDs from the previous call (not the default calendar of userId).
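For reference, this is roughly how I send the request from Node (a sketch assuming axios and the graphToken() helper above):

const axios = require('axios')

async function createEvent(userId, calendarId, event) {
  const token = await graphToken() // helper from above
  const url = `https://graph.microsoft.com/v1.0/users/${userId}/calendars/${calendarId}/events`
  // The app-only token goes in the Authorization header as a Bearer token
  const response = await axios.post(url, event, {
    headers: { Authorization: `Bearer ${token}` },
  })
  return response.data
}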
I got a 403 Forbidden with the following response
{
  "error": {
    "code": "ErrorAccessDenied",
    "message": "Access is denied. Check credentials and try again."
  }
}
I also decoded my token to see if I could get some info on the root cause of the 403 error, and I found this:
...
"roles": [
"Calendars.Read",
"User.Read.All",
"Calendars.ReadWrite"
],
...
It seems correct to me.
I don't get whether it is a scope issue, an authentication issue, or something I'm missing; can someone point me in the right direction?
Thanks in advance
Basically, it was my fault.
I messed up the calendar permissions: my test user had reviewer permission instead of author permission on the calendar I had to write to.
Once I identified this and changed the permission, the call returned the expected response.
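For anyone debugging the same thing, Graph exposes a calendarPermissions endpoint you can use to check which role a user has on a calendar; a minimal sketch, reusing the graphToken() helper from the question:

const axios = require('axios')

async function listCalendarPermissions(userId, calendarId) {
  const token = await graphToken()
  const url = `https://graph.microsoft.com/v1.0/users/${userId}/calendars/${calendarId}/calendarPermissions`
  const response = await axios.get(url, {
    headers: { Authorization: `Bearer ${token}` },
  })
  // Each entry lists an emailAddress and a role such as 'read' or 'write'
  return response.data.value
}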
I leave this answer as a reference for anyone who encounters this issue.
Thanks anyway
I'm attempting to refactor the "Node.JS PowerBI App Owns Data for Customers w/ Service Principal" code example (found HERE).
My objective is to import the data for the "config.json" from a table in my database and insert the "workspaceId" and "reportId" values from my database into the "getEmbedInfo()" function (inside the "embedConfigServices.js" file). Reason being, I want to use different configurations based on user attributes. I am using Auth0 to login users on the frontend, and I am sending the user metadata to the backend so that I can filter the database query by the user's company name.
I am able to console.log the config data, but I am having difficulty figuring out how to insert those results into the "getEmbedInfo()" function.
It feels like I'm making a simple syntax error somewhere, but I am stuck. Here's a sample of my code:
//----Code Snippet from "embedConfigServices.js" file ----//
async function getEmbedInfo() {
  try {
    const url = ;
    const set_config = async function () {
      let response = await axios.get(url);
      const config = response.data;
      console.log(config);
    };
    set_config();
    const embedParams = await getEmbedParamsForSingleReport(
      config.workspaceId,
      config.reportId
    );
    return {
      accessToken: embedParams.embedToken.token,
      embedUrl: embedParams.reportsDetail,
      expiry: embedParams.embedToken.expiration,
      status: 200,
    };
  } catch (err) {
    return {
      status: err.status,
      error: err.statusText,
    };
  }
}
This is the error I am receiving on the frontend:
"Cannot read property 'get' of undefined"
Any help would be much appreciated. Thanks in advance.
Carlos
The error is because the wrong URL is being fetched. The problem is with the config for the service principal. You will need to provide the reportId and workspaceId for the SPA, and also make sure you have added the service principal to the workspace and followed all the steps from the documentation below for service principal authentication.
References:
https://learn.microsoft.com/power-bi/developer/embedded/embed-service-principal
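As a side note on the snippet in the question: config is declared inside set_config() and the axios call is never awaited, so config is undefined by the time getEmbedParamsForSingleReport() runs. A minimal sketch of awaiting the fetch first (the configUrl parameter and the { workspaceId, reportId } response shape are assumptions based on the question):

async function getEmbedInfo(configUrl) {
  try {
    // Await the config fetch so workspaceId/reportId exist before we use them
    const response = await axios.get(configUrl);
    const config = response.data;

    const embedParams = await getEmbedParamsForSingleReport(
      config.workspaceId,
      config.reportId
    );
    return {
      accessToken: embedParams.embedToken.token,
      embedUrl: embedParams.reportsDetail,
      expiry: embedParams.embedToken.expiration,
      status: 200,
    };
  } catch (err) {
    return { status: err.status, error: err.statusText };
  }
}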
I am new to development on Teams and Botkit. There is a bot that is up and running on Teams.
I want to share a file generated by the bot with the user (send a file from the bot to the user) on Teams. I have read the Microsoft Teams documentation, according to which the first step is to send a message requesting permission to upload, which I was able to complete successfully. Below is the code I used to show the consent card to the user:
controller.hears('download', ['message_received', 'direct_message', 'direct_mention'], function (bot, message) {
  var reply = { text: "", attachments: [] }
  var ticketObj = {
    "contentType": "application/vnd.microsoft.teams.card.file.consent",
    "name": "result.txt",
    "content": {
      "description": "Text recognized from image",
      "sizeInBytes": 4348,
      "acceptContext": {
        "resultId": "1a1e318d-8496-471b-9612-720ee4b1b592"
      },
      "declineContext": {
        "resultId": "1a1e318d-8496-471b-9612-720ee4b1b592"
      }
    }
  }
  reply.attachments.push(ticketObj)
  bot.reply(message, reply)
})
According to the Microsoft Teams documentation, when the user clicks the accept button, the bot will receive an invoke activity with a location URL.
But when I click accept, nothing reaches my bot. It shows the error message: "This card action is not supported".
How do I provide support for this card action?
Adding answer from comment section for more visibility:
The issue is resolved now; it was with the uploaded manifest.json.
To send and receive files in the bot, set the supportsFiles property
in the manifest to true. This property is described in the bots
section of the Manifest reference.
The definition looks like this, "supportsFiles": true. If the bot does
not enable supportsFiles, the features listed in this section do not
work.
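For reference, a minimal sketch of the relevant bots section of the manifest (the botId and scopes are placeholders):

"bots": [
  {
    "botId": "<your-bot-app-id>",
    "scopes": ["personal", "team"],
    "supportsFiles": true
  }
]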
https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/bots-filesv4#configure-the-bot-to-support-files
Sample Link:
https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/56.teams-file-upload
I'm building a tool with ReactJS where (logged-in) users can upload images to a kind of gallery that they can access from their user account whenever they like.
I plan to upload the data directly to cloud storage (most likely Wasabi) and store it there.
What are best practices for letting users access images in a secure way? Of course the user who uploaded the images should be able to see them in their gallery, but no other user should. I can of course show only the images associated with the user in their gallery, but I am concerned about data protection and also about direct access to the images. Is it enough to store the files e.g. in a folder with a hashed name, so that no one can guess the image path? Or are other restrictions necessary? I just don't see how I can further restrict access to the images displayed in the account without drastically degrading the user experience.
Happy to read your thoughts.
Thanks!
I had a similar issue where I couldn't simply hand out the URL of the image resource, since those image resources were specific to certain users.
I had my API return a byte stream for the image and then consumed it in my React app as follows:
images(id: string, cancelToken?: CancelToken): Promise<FileResponse> {
  // Substitute the id into the endpoint path
  let url_ = "pathtoendpoint/" + encodeURIComponent(id);
  let options_ = <AxiosRequestConfig>{
    cancelToken,
    responseType: "blob",
    method: "GET",
    url: url_,
    headers: {
      Accept: "application/octet-stream",
      Authorization: "Bearer ezfsdfksf...."
    },
  };
  return Axios.create().request(options_).then((_response: AxiosResponse) => {
    // Wrap the blob and response metadata in our FileResponse shape
    return {
      data: _response.data,
      status: _response.status,
      headers: _response.headers,
    };
  });
}
export interface FileResponse {
data: Blob;
status: number;
fileName?: string;
headers?: { [name: string]: any };
}
I then create an ephemeral object URL (a DOMString) pointing at the blob:
let uri = URL.createObjectURL(data) // data is the Blob from the FileResponse
Once I have the uri, I can display it like so:
<img src={uri} id="someId" alt="" />
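Tying it together in a component, here is a minimal sketch (the images() client above is assumed to be in scope; the component name is made up; remember to revoke the object URL so the blob can be garbage-collected):

import React, { useEffect, useState } from "react";

function SecureImage({ id }: { id: string }) {
  const [uri, setUri] = useState<string>();

  useEffect(() => {
    let objectUrl: string | undefined;
    // Fetch the image bytes through the authenticated endpoint
    images(id).then((res) => {
      objectUrl = URL.createObjectURL(res.data);
      setUri(objectUrl);
    });
    // Revoke the object URL on unmount to free the blob
    return () => {
      if (objectUrl) URL.revokeObjectURL(objectUrl);
    };
  }, [id]);

  return uri ? <img src={uri} alt="" /> : null;
}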
By following this approach, I'm treating the image bytes like any other type of data, be it a string, a number, or a boolean, and I can secure that resource.
Here's what my server-side controller method looks like. I'm using .NET Core, but you'd follow a similar approach on any other platform:
[Produces("application/octet-stream")]
public async Task<ActionResult> GetImageById([FromRoute] Guid id)
{
    var resource = await DatalayerService.GetImageAsync(id);
    return File(resource.Filebytes, resource.ContentType);
}
We are using Node.js for our REST APIs and ReactJS for an app. We are trying to fetch AWS S3 images from Node.js using aws-sdk and then display them in React. The thing is, the AWS bucket does not have public access, and it should not have public access. How do we solve this problem?
From Node.js we are getting the s3 listObjects output; can we access the image from ReactJS using the object below?
We have read a few more docs that suggested using a signed URL, but will it work in the browser for displaying images to clients?
{
  "Key": "public/5db0476246e0fb0004r4rbff5/s3-c0c79f542f3c.jpg",
  "LastModified": "2019-10-23T12:30:32.000Z",
  "ETag": "\"269b2c5455h220bccc374f4f4rfee\"",
  "Size": 510811,
  "StorageClass": "STANDARD",
  "Owner": {
    "ID": "dad9f9dfk39dfijir93irjfiejfidjfjdfdfdfr3r3r3r3fef3"
  }
}
You can put the bucket behind the CloudFront CDN and distribute your content using signed URLs, limit access to certain origins, and apply anything else that fits your use case.
My place of work uses CloudFront with signed URLs for the same use case.
I think this AWS help doc will help more:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
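To answer the signed-URL question directly: yes, a pre-signed URL works in the browser, since it is an ordinary HTTPS URL with temporary credentials in the query string, so it can go straight into an img tag. A minimal sketch with aws-sdk v2 (the bucket name is a placeholder; the key comes from your listObjects output):

const AWS = require('aws-sdk');

const s3 = new AWS.S3();

function getImageUrl(key) {
  // Generates a time-limited GET URL without making the object public
  return s3.getSignedUrl('getObject', {
    Bucket: '<Bucket Name Here>',
    Key: key,
    Expires: 60 * 5, // URL is valid for 5 minutes
  });
}

// The React app can then use the returned URL directly:
// <img src={signedUrl} alt="" />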
You can use this code to make an object public while uploading it:
const AWS = require('aws-sdk');
const fs = require('fs');
const path = require('path');

// Configuring the AWS environment
AWS.config.update({
  accessKeyId: "<Access Key Here>",
  secretAccessKey: "<Secret Access Key Here>"
});

var s3 = new AWS.S3();
var filePath = "";

// Configuring parameters
var params = {
  Bucket: '<Bucket Name Here>',
  Body: fs.createReadStream(filePath),
  Key: "folder/" + Date.now() + "_" + path.basename(filePath),
  ACL: 'public-read'
};

s3.upload(params, function (err, data) {
  // Handle error
  if (err) {
    console.log("Error", err);
  }
  // Success
  if (data) {
    console.log("Uploaded in:", data.Location);
  }
});
Just add ACL: 'public-read' to the call when you upload the image from your code (ignore this if you don't have any upload facility).
Unfortunately, for objects that have already been uploaded, you cannot change the permission to public this way. For that, please refer to this documentation: https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/.
Highlighting the best possible approach for you (you can still refer to the document):
Use a bucket policy that grants public read access to a specific prefix
To grant public read access to a specific object prefix, add a bucket policy similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::awsexamplebucket/publicprefix/*"]
    }
  ]
}
Then, copy the objects into the prefix with public read access. You can copy an object into the prefix by running a command similar to the following:
aws s3 cp s3://awsexamplebucket/exampleobject s3://awsexamplebucket/publicprefix/exampleobject