My goal is to get the image from the Twilio API and store it somewhere else, because Twilio deletes media after 4 hours.
I'm using Node, and I have read https://www.twilio.com/docs/sms/api/media-resource#fetch-a-media-resource
It says that a request to the following URL without the ".json" extension should return the media with its original MIME type:
https://api.twilio.com/2010-04-01/Accounts/{AccountSid}/Messages/{MessageSid}/Media/{Sid}.json
However, I need auth, so I need to use
const client = require('twilio')(accountSid, authToken);
How can I fetch the image? Any sample code to achieve it? The docs seem to do it without auth.
UPDATE ----------------------------------------
After accessing the MediaUrl0 in the browser, Twilio redirects me to the following URL:
https://s3-external-1.amazonaws.com/media.twiliocdn.com/{AccountSid}/{?}
I was thinking of building my own URL, but I don't know how to get the {?} part.
You do not need authentication to retrieve media for an incoming SMS. They are all hosted (as of now, on AWS S3) and publicly accessible through a (hard-to-guess) URL.
You can access them using any HTTP client.
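Building on that, here is a minimal sketch (not official Twilio sample code) of fetching and storing an MMS image. It assumes axios is installed; mediaUrl would be the MediaUrl0 value from the incoming message webhook, or the API URL above without the ".json" extension.

// Download the media from its public URL and save it locally.
const fs = require("fs");
const axios = require("axios");

async function saveMedia(mediaUrl, destPath) {
  // axios follows the redirect to the CDN/S3 location automatically
  const response = await axios.get(mediaUrl, { responseType: "stream" });
  await new Promise((resolve, reject) => {
    response.data
      .pipe(fs.createWriteStream(destPath))
      .on("finish", resolve)
      .on("error", reject);
  });
}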
Related
✨ Hello everyone!✨
General Problem:
I have a web app with about 50 images that shouldn't be accessible before the user logs into the site. I suspect this has a simple answer, since plenty of sites need this kind of basic protection. Maybe I don't know the right words to google here, but I'm having a bit of trouble. Any help is appreciated.
App details:
My web app is built in TypeScript React, with a Node.js/Express/MongoDB backend. Fairly typical stuff.
What I have tried:
My best thought so far was to upload them into the public folder on the backend server hosted on Heroku. Then I protected the images with authentication middleware on any URL that had "/images/" as part of it. This works, partially: I am able to see the images when I call the API from Postman with the authentication header, but I cannot figure out a way to display that image in my React web app. Here is the basic call I used:
fetch(url, {
  headers: {
    Authorization: token,
  },
});
The actual response is just an empty object when I try to copy it:
{}
but I also get some kind of ReadableStream when I console.log the raw response.
From the following related question, I came up with this (which is normally wrapped in an async function):
const image = await fetch(url, { headers: { Authorization: token } });
const theBlob = await image.blob();
console.log(URL.createObjectURL(theBlob));
which gives me the link http://localhost:3000/b299feb8-6ee2-433d-bf05-05bce01516b3, which only displays a blank page.
Any help is very much appreciated! Thanks! 😄
After lots of work trying to understand what's going on, here is my own answer:
import axios from "axios";

// responseType: "blob" makes axios return a Blob instead of a parsed body
const image = await axios(url, { responseType: "blob", headers: { Authorization: token } });
const srcForImage = URL.createObjectURL(image.data);
Why it makes sense now
So I did not understand the inner workings of what was going on. Please correct me, but the following is my understanding:
The image was being sent as binary data. What I had to do to fix that was to set the responseType in axios to "blob", which makes axios hand the response back as a Blob (raw binary data held by the browser) rather than trying to parse it. Then the function URL.createObjectURL does some magic and registers it with the browser as part of the page, so we can just use that as the image URL. When you visit it yourself, you must include the 'blob:' part of the URL it gives you too, otherwise it's blank, or stick it in <img src={srcForImage}/> and it works great. I bet it would've worked in the original fetch example in the question; I just never put the URL in an img tag or included 'blob:' as part of the URL.
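For completeness, a small sketch of the whole flow in the browser; the function, element, and variable names are placeholders, not the actual app code:

// Fetch a protected image as a Blob, point an <img> at it, and clean up
// the object URL once the image has loaded.
async function loadProtectedImage(url, token, imgElement) {
  const response = await fetch(url, { headers: { Authorization: token } });
  const blob = await response.blob();
  const objectUrl = URL.createObjectURL(blob); // "blob:http://localhost:3000/..."
  imgElement.src = objectUrl;
  imgElement.onload = () => URL.revokeObjectURL(objectUrl);
}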
That's correct: you send the auth token and the backend uses it to authenticate the user (check that they exist in the DB, that they have the correct role, and verify the JWT too).
The server only responds with the images if the above checks pass.
If your server is responding with an empty object, then the problem is the backend, not the frontend; console.log what you're sending to the frontend.
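A rough sketch of what that backend check could look like with Express; the JWT secret, header format, and folder layout are assumptions, not the actual project code:

// Only serve /images/* to requests that carry a valid token.
const express = require("express");
const jwt = require("jsonwebtoken");

const app = express();

function requireAuth(req, res, next) {
  const token = req.headers.authorization;
  if (!token) return res.status(401).json({ error: "Missing token" });
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET); // throws if invalid or expired
    next();
  } catch (err) {
    res.status(401).json({ error: "Invalid token" });
  }
}

// Everything under /images goes through the auth check before the static handler.
app.use("/images", requireAuth, express.static("public/images"));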
I want to access binary file data uploaded from the browser.
The Nest.js server application works fine from Postman, but it throws a 400 error when the request is made from the Google Chrome/Angular application.
Any thoughts on how to make the Nest.js application accept binary file data? Please let me know.
Attachments for reference: screenshots of the Postman request and the Google Chrome request.
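For comparison, this is roughly what a typical Nest.js endpoint for multipart/binary uploads looks like, using the built-in Multer integration from @nestjs/platform-express; the controller, route, and field names here are assumptions, not the code behind the screenshots:

import { Controller, Post, UploadedFile, UseInterceptors } from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';

@Controller('upload')
export class UploadController {
  @Post()
  @UseInterceptors(FileInterceptor('file')) // 'file' must match the form field name the browser sends
  uploadFile(@UploadedFile() file: Express.Multer.File) {
    // With the default memory storage, file.buffer holds the raw binary data.
    return { originalName: file.originalname, size: file.size };
  }
}

A 400 that appears only in the browser and not in Postman often comes down to the two clients sending differently shaped bodies (field name, Content-Type), so comparing the two requests with that in mind is a reasonable first step.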
I have a service that shares HTML with multiple client websites. I need to know the URL of the page the request is coming from.
The client will add a custom script to their website; the script will load the Firebase SDK and call one of my callable Firebase functions.
exports.testFunction = functions.https.onCall(async (data, context) => {
  // How do you access the requesting URL?
  console.log(context.rawRequest.originalUrl); // logs "/"
  console.log(context.rawRequest.url);         // logs "/"
});
Thank you,
HTTP requests to callable functions don't really come "from" a URL. They come from anywhere on the internet. It could be a web site, Android or iOS app, or someone who simply knows the protocol to call the function.
If you're building a web app and you want to pass along the URL of the page making the request, you'll have to add that data into the object that the client passes to the function, which shows up in data.
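A sketch of that approach, with pageUrl as a made-up field name; since it is client-supplied, it can't be trusted any more than the rest of the payload:

// --- client script embedded on the customer's page ---
import { getFunctions, httpsCallable } from "firebase/functions";

const testFunction = httpsCallable(getFunctions(), "testFunction");
testFunction({ pageUrl: window.location.href })
  .then((result) => console.log(result.data));

// --- functions/index.js ---
const functions = require("firebase-functions");

exports.testFunction = functions.https.onCall(async (data, context) => {
  console.log(data.pageUrl); // whatever URL the client chose to report
  return { received: true };
});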
I want to send a request to this Amazon Alexa API.
That page contains the last 50 activities I made with my Amazon Echo. The page returns JSON. Before you can request that page, you need to authorize your account, so the proper cookies are set in your browser.
If I do something as simple as:
const rp = require("request-promise");

const options = {
  method: "GET",
  uri: "https://alexa.amazon.com/api/activities?startTime=&size=50&offset=-1",
  json: true
};

rp(options).then(function(data) {
  console.log(data);
}).catch(function(err) {
  console.log(err);
});
I can send a GET request to that URL. This works fine, except Amazon has no idea it's me sending the request, because I haven't authorized my Node.js application.
I've successfully copied ~10 cookies from my regular browser into an incognito tab and authorized that way, so I know copying the cookies will work. After adding them all using tough-cookie, it unfortunately didn't work; I still got redirected to the sign-in page (according to the error response).
How do I authorize for this API, so I can send my requests?
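For reference, this is roughly how the copied cookies were attached via a cookie jar (the cookie name and value are placeholders); as described above, the response is still a redirect to the sign-in page:

const rp = require("request-promise");
const request = require("request");

// One jar.setCookie call per cookie copied from the browser session.
const jar = request.jar();
jar.setCookie(request.cookie("name=value"), "https://alexa.amazon.com");

rp({
  method: "GET",
  uri: "https://alexa.amazon.com/api/activities?startTime=&size=50&offset=-1",
  json: true,
  jar // send the copied cookies with the request
}).then(console.log).catch(console.error);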
I have been looking for a solution to this too. The best idea I have is to use account linking, but I haven't tried it yet. It looks like ASK-CLI has an interface for this as well, but I can't figure out how to use it (what is that URL?). Linking an account to a third-party server is not easy, but linking it back to Amazon for the JSON API should not be that complicated.
I'm trying to use download_url to get a file from SoundCloud.
I either get redirected or get a '401 Unauthorized' response. How can I download/stream it to the client side?
Thanks
If you are getting a 401 response, then you should include your client_id in the request. It might also be that it's a private sound, in which case you'd also have to include the OAuth credentials.
The actual success response is a redirect, since the stream files are accessible from a different server, but only with special time-limited access tokens (included in the redirect response). Basically, just follow the redirect and you'll have your stream.
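A sketch of that in Node, with placeholder track id and client_id; axios (or any HTTP client that follows redirects) works:

// Include the client_id, let the client follow the redirect to the
// time-limited file URL, then pipe the stream wherever it needs to go
// (a file here, or an HTTP response if streaming to the client side).
const fs = require("fs");
const axios = require("axios");

axios.get("https://api.soundcloud.com/tracks/TRACK_ID/download?client_id=YOUR_CLIENT_ID", {
  responseType: "stream"
}).then((response) => {
  response.data.pipe(fs.createWriteStream("track.mp3"));
});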
You are using the wrong URL structure.
Try this, it's working (I tested it):
http://api.soundcloud.com/tracks/773150/download?client_id={Your ID Here}
You should use ? instead of & before client_id.