I'm building an API that makes authorized calls to Google's APIs, specifically Drive for this question. My API is working fine and uses Google's Node API client to make the requests. When I fire off a request to this resource, I get back the following response:
{
"kind": "drive#file",
"id": "...",
"name": "bookmobile.jpg",
"mimeType": "image/jpeg"
}
I use the above response to determine the MIME type of the file I'm to display later. I then make a subsequent call to the same endpoint, but specifying alt=media as an option to download the file as specified in Google's Guide. If I console.log or res.send() the response, I get the following output:
As we can see, that is the raw image bytes from the API call. How do I render these bytes to the response body properly? My code is as follows:
// DriveController.show
exports.show = async ({ query, params }, res) => {
if (query.alt && query.alt.toLowerCase().trim() === 'media') {
// Set to JSON as we need to get the content type of the resource
query.alt = 'json'
// Get the Files Resource object
const options = createOptions(query, params.fileId)
const filesResource = await Promise.fromCallback(cb => files.get(options, cb))
// Grab the raw image bytes
query.alt = 'media'
await createAPIRequest(createOptions(query, params.fileId), 'get', res, filesResource)
} else {
await createAPIRequest(createOptions(query, params.fileId), 'get', res)
}
}
async function createAPIRequest (options, method, res, filesResource = {}) {
try {
const response = await Promise.fromCallback(cb => files[method](options, cb))
if (filesResource.hasOwnProperty('mimeType')) {
// Render file resource to body here
} else {
res.json(response)
}
} catch (error) {
res.json(error)
}
}
Searching through various answers here, they all seem to point to the following:
res.type(filesResource.mimeType)
const image = Buffer.from(response, 'binary')
fs.createReadStream(image).pipe(res)
But this kills my Express app with the following error:
Error: Path must be a string without null bytes
How would I go about rendering those raw image bytes to the response body properly?
The Google API client returns binary data as a string by default, which corrupts image data. (The issue is discussed on this thread: https://github.com/google/google-api-nodejs-client/issues/618). To fix it, use the encoding: null option when requesting the file contents:
const response = await Promise.fromCallback(cb => files[method](options, { encoding: null }, cb))
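With that change the callback yields a Buffer instead of a mangled string, so the branch marked // Render file resource to body here can send it directly (a minimal sketch, assuming response is now a Buffer):
res.type(filesResource.mimeType)
res.send(response)
There is no need for fs.createReadStream here: it expects a file path, which is why handing it raw image bytes throws the "Path must be a string without null bytes" error.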
In my svelte-kit application I was struggling with this Node error ERR_INVALID_URL but was able to fix it with a solution provided in this thread. Unfortunately a deeper explanation of why Node can't parse the url - which is obviously only a valid route when the code runs on the client - was omitted.
In svelte-kit's load function I'm implicitly fetch-ing a url that is, from Node's perspective, invalid (ERR_INVALID_URL).
So what I'd love to understand is: WHY does Node fail to resolve/parse the given url?
Prerequisites:
// in $lib/utils/http.js
export function post(endpoint, data = {}) {
return fetch(endpoint, {
method: "POST",
credentials: "include",
body: JSON.stringify(data),
headers: {
"Content-Type": "application/json",
},
}).then((r) => r.json());
}
// in routes/auth/login.js -> this endpoint can't be found by Node
export async function post({ locals, request }) {
// ...code here
return {
body: request.json()
}
}
Here a distinction has to be made between the code running on the client and on the server:
// in routes/login.svelte
import { browser } from '$app/env';
import { post } from '$lib/utils/http.js';
export async function load() {
const { data } = someDataObject;
if (browser) { // NODE wouldn't be able to find the endpoint in question ('/auth/login'), whereas the client does
return await post(`/auth/login`, { data }).then((response) => {
// ...do something with the response
});
}
return {};
}
Thanks for any explanation that sheds some light on this.
You should refactor your load function to use the fetch provided by SvelteKit. This will allow you to use relative requests on the server, which normally requires an origin. From the docs (emphasis mine):
fetch is equivalent to the native fetch web API, with a few additional features:
it can be used to make credentialed requests on the server, as it inherits the cookie and authorization headers for the page request
it can make relative requests on the server (ordinarily, fetch requires a URL with an origin when used in a server context)
requests for endpoints go direct to the handler function during server-side rendering, without the overhead of an HTTP call
during server-side rendering, the response will be captured and inlined into the rendered HTML
during hydration, the response will be read from the HTML, guaranteeing consistency and preventing an additional network request
So, get the fetch from the parameter passed to load...
export async function load({ fetch }) {
const { data } = someDataObject;
return await post(`/auth/login`, fetch, { data }).then((response) => {
// ...do something with the response
});
}
... and use it in your post function
// in $lib/utils/http.js
export function post(endpoint, fetch, data = {}) { /* rest as before */ }
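For completeness, a sketch of what that adjusted function could look like (same body as before, only the fetch parameter is new):
// in $lib/utils/http.js
export function post(endpoint, fetch, data = {}) {
  return fetch(endpoint, {
    method: "POST",
    credentials: "include",
    body: JSON.stringify(data),
    headers: {
      "Content-Type": "application/json",
    },
  }).then((r) => r.json());
}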
A future enhancement to SvelteKit may make it so you don't have to pass fetch to your utility function, but this is what you have to do for now.
It seems to be a difficult problem (or maybe impossible?).
I want to get and read the HTTP response, caused by an HTTP request in the browser, from a watching Chrome Extension background script.
We can get the HTTP request body this way:
chrome.webRequest.onBeforeRequest.addListener(function(data){
// data contains request_body
},{'urls':[]},['requestBody']);
I also checked these Stack Overflow questions:
Chrome extensions - Other ways to read response bodies than chrome.devtools.network?
Chrome extension to read HTTP response
Is there any clever way to get the HTTP response body in a Chrome extension?
I can't find a better way than this answer.
Chrome extension to read HTTP response
The answer explains how to get the response headers and display them in another page, but there is no body info in the response object (see event: responseReceived). If you want to get the response body without another page, try this.
var currentTab;
var version = "1.0";
chrome.tabs.query( //get current Tab
{
currentWindow: true,
active: true
},
function(tabArray) {
currentTab = tabArray[0];
chrome.debugger.attach({ //debug at current tab
tabId: currentTab.id
}, version, onAttach.bind(null, currentTab.id));
}
)
function onAttach(tabId) {
chrome.debugger.sendCommand({ //first enable the Network
tabId: tabId
}, "Network.enable");
chrome.debugger.onEvent.addListener(allEventHandler);
}
function allEventHandler(debuggeeId, message, params) {
if (currentTab.id != debuggeeId.tabId) {
return;
}
if (message == "Network.responseReceived") { //response return
chrome.debugger.sendCommand({
tabId: debuggeeId.tabId
}, "Network.getResponseBody", {
"requestId": params.requestId
}, function(response) {
// you get the response body here!
// you can close the debugger tips by:
chrome.debugger.detach(debuggeeId);
});
}
}
I think it's useful enough for me, and you can use chrome.debugger.detach(debuggeeId) to close the ugly tip.
Sorry, maybe not helpful... ^ ^
There is now a way in a Chrome Developer Tools extension, and sample code can be seen here: blog post.
In short, here is an adaptation of his sample code:
chrome.devtools.network.onRequestFinished.addListener(request => {
request.getContent((body) => {
if (request.request && request.request.url) {
if (request.request.url.includes('facebook.com')) {
//continue with custom code
var bodyObj = JSON.parse(body);//etc.
}
}
});
});
This is definitely something that is not provided out of the box by the Chrome Extension ecosystem. However, I could find a couple of ways to get around this, and both come with their own set of drawbacks.
The first way is:
Use a content script to inject our own custom script.
Use the custom script to extend XHR's native methods to read the response.
Add the response to the web page's DOM inside a hidden (not display: none) element.
Use the content script to read the hidden response.
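A minimal sketch of the injected script for this first approach (the element id and the exact hiding technique are assumptions):
(function () {
  const originalOpen = XMLHttpRequest.prototype.open;
  XMLHttpRequest.prototype.open = function (...args) {
    this.addEventListener('load', function () {
      // write the response into a hidden element the content script can observe
      // (responseText is only available for text responses)
      let holder = document.getElementById('__xhr_response_holder');
      if (!holder) {
        holder = document.createElement('div');
        holder.id = '__xhr_response_holder';
        holder.style.height = '0';
        holder.style.overflow = 'hidden'; // hidden, but not display: none
        document.documentElement.appendChild(holder);
      }
      holder.textContent = this.responseText;
    });
    return originalOpen.apply(this, args);
  };
})();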
The second way is to create a DevTools extension which is the only extension that provides an API to read each request.
I have penned down both the methods in a detailed manner in a blog post here.
Let me know if you face any issues! :)
To get a XHR response body you can follow the instructions in this answer.
To get a FETCH response body you can check Solution 3 in this article and also this answer. Both get the response body without using chrome.debugger.
In a nutshell, you need to inject the following function into the page from the content script using the same method used for the XHR requests.
const constantMock = window.fetch;
window.fetch = function() {
return new Promise((resolve, reject) => {
constantMock.apply(this, arguments)
.then((response) => {
if (response) {
response.clone().json() //the response body is a readablestream, which can only be read once. That's why we make a clone here and work with the clone
.then( (json) => {
console.log(json);
//Do whatever you want with the json
resolve(response);
})
.catch((error) => {
console.log(error);
reject(response);
})
}
else {
console.log(arguments);
console.log('Undefined Response!');
reject(response);
}
})
.catch((error) => {
console.log(error);
reject(response);
})
})
}
If response.clone().json() does not work, you can try response.clone().text()
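To actually get that override into the page, the content script can inject it with a script tag (a common pattern, sketched below; the file name is an assumption and the file has to be listed under web_accessible_resources in the manifest):
// content-script.js
const s = document.createElement('script');
s.src = chrome.runtime.getURL('fetch-override.js'); // file containing the fetch override above
s.onload = function () { this.remove(); };
(document.head || document.documentElement).appendChild(s);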
I'll show my completed code in case it is of some help. I added underscore to pass the request url down to the callback, thanks.
//background.js
import _, { map } from 'underscore';
var currentTab;
var version = "1.0";
chrome.tabs.onActivated.addListener(activeTab => {
currentTab&&chrome.debugger.detach({tabId:currentTab.tabId});
currentTab = activeTab;
chrome.debugger.attach({ //debug at current tab
tabId: currentTab.tabId
}, version, onAttach.bind(null, currentTab.tabId));
});
function onAttach(tabId) {
chrome.debugger.sendCommand({ //first enable the Network
tabId: tabId
}, "Network.enable");
chrome.debugger.onEvent.addListener(allEventHandler);
}
function allEventHandler(debuggeeId, message, params) {
if (currentTab.tabId !== debuggeeId.tabId) {
return;
}
if (message === "Network.responseReceived") { //response return
chrome.debugger.sendCommand({
tabId: debuggeeId.tabId
}, "Network.getResponseBody", {
"requestId": params.requestId
//use underscore to pass one more argument to the callback, handing params down to it
}, _.partial(function(response,params) {
// you get the response body here!
console.log(response.body,params.response.url);
// you can close the debugger tips by:
// chrome.debugger.detach(debuggeeId);
},_,params));
}
}
I also found what looks like a bug in chrome.debugger.sendCommand. If I have two requests with the same URI but different arguments, such as:
requests 1:https://www.example.com/orders-api/search?limit=15&offer=0
requests 2:https://www.example.com/orders-api/search?limit=85&offer=15
The second one will not get the correct responseBody; it will show:
Chrome Extension: "Unchecked runtime.lastError: {"code":-32000,"message":"No resource with given identifier found"}
But when I run the command directly in the background page's devtools, it gets the second body correctly:
chrome.debugger.sendCommand({tabId:2},"Network.getResponseBody",{requestId:"6932.574"},function(response){console.log(response.body)})
So there is no problem with tabId and requestId.
Then I wrapped chrome.debugger.sendCommand in a setTimeout, and it gets both the first and the second responseBody correctly:
if (message === "Network.responseReceived") { //response return
console.log(params.response.url,debuggeeId.tabId,params.requestId)
setTimeout(()=>{
chrome.debugger.sendCommand({
tabId: debuggeeId.tabId
}, "Network.getResponseBody", {
"requestId": params.requestId
//use underscore to pass one more argument to the callback, handing params down to it
}, _.partial(function(response,params,debuggeeId) {
// you get the response body here!
console.log(response.body,params.response.url);
// you can close the debugger tips by:
// chrome.debugger.detach(debuggeeId);
},_,params,debuggeeId));
},800)
}
I don't think the setTimeout is the perfect solution; can someone help?
Thanks.
var unirest = require("unirest");
var req = unirest("GET", "https://edamam-edamam-nutrition-analysis.p.rapidapi.com/api/nutrition-data");
req.query({
"ingr": "1 large apple"
});
req.headers({
"x-rapidapi-host": "HOST",
"x-rapidapi-key": "KEY",
"useQueryString": true
});
req.end(function (res) {
if (res.error) throw new Error(res.error);
console.log(res.body);
});
I'm trying to make an API call following that doc and those parameters in order to get a list of ingredients based on a search parameter.
This is my service:
async getAllNutrients(ingr: string) {
console.log(ingr);
const headersRequest = {
'x-rapidapi-host': 'edamam-edamam-nutrition-analysis.p.rapidapi.com',
'x-rapidapi-key': '5664b75c9fmsh66ac8e054422eb9p1600b8jsn878d097e8d2a',
useQueryString: true,
};
const result = await this.httpService.get(
`https://edamam-edamam-nutrition-analysis.p.rapidapi.com/api/nutrition-data` +
ingr,
{ headers: headersRequest },
);
console.log(result);
return result;
}
And this is my controller
@Get('/list?:ingr')
getMacros(@Query('ingr') ingr) {
return this.macroService.getAllNutrients(ingr);
}
I tried to change Query and Param but neither is working.
On Postman I make an API call like this:
localhost:3000/macros/list?ingr="1 large apple"
And my 2 console.log calls return:
"1 large apple"
Observable { _isScalar: false, _subscribe: [Function] }
[Nest] 7460 - 2020-09-21 16:00:55 [ExceptionsHandler] Request failed with status code 404 +441782ms
Error: Request failed with status code 404
I tried to use pipe like in this example:
getQuote(id){
return this.http.get('http://quotesondesign.com/wp-json/posts/' + id)
.pipe(
map(response => response.data)
);
}
But the result was the same. Any help?
Looks like your issue is in your controller's route. Changing @Get('/list?:ingr') to @Get('/list') should resolve this. I believe passing ?:ingr in the path is setting a param with key ingr.
Queries do not need to be added to the route. They are accessed using the @Query decorator.
Look at this for more info.
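A minimal sketch of the adjusted controller, using the same names as in the question:
@Get('/list')
getMacros(@Query('ingr') ingr) {
  return this.macroService.getAllNutrients(ingr);
}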
In your service function
const result = await this.httpService.get(
`https://edamam-edamam-nutrition-analysis.p.rapidapi.com/api/nutrition-data` +
ingr,
{ headers: headersRequest },
);
with ingr being 1 large apple, the API URL will become "https://edamam-edamam-nutrition-analysis.p.rapidapi.com/api/nutrition-data1 large apple".
I think this is an incorrect API URL, and you don’t want to call the API like that.
Change it to
`https://edamam-edamam-nutrition-analysis.p.rapidapi.com/api/nutrition-data?ingr=${ingr}`,
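For example, the final URL could be built like this (the encodeURIComponent call is an addition, so that the space in the ingredient stays valid in the query string):
const url = `https://edamam-edamam-nutrition-analysis.p.rapidapi.com/api/nutrition-data?ingr=${encodeURIComponent(ingr)}`;
// -> ".../api/nutrition-data?ingr=1%20large%20apple"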
When we use Axios we always have to get the data from the response, like this:
const response = await Axios.get('/url')
const data = response.data
Is there a way to make Axios return the data directly? Like this:
const data = await Axios.get('/url')
We never use anything besides the data from the response.
You can use ES6 destructuring like this:
const { data } = await Axios.get('/url');
So you won't have to write another line of code.
Add a response interceptor:
axios.interceptors.response.use(function (response) {
// Any status code that lie within the range of 2xx cause this function to trigger
// Do something with response data
return response.data; // return the data directly
}, function (error) {
// Any status codes that falls outside the range of 2xx cause this function to trigger
// Do something with response error
return Promise.reject(error);
});
What I normally do is create a js file called interceptors.js:
import axios from 'axios';
export function registerInterceptors() {
axios.interceptors.response.use(
function (response) {
// Any status code that lie within the range of 2xx cause this function to trigger
// Do something with response data
return response.data;
},
function (error) {
// Any status codes that falls outside the range of 2xx cause this function to trigger
// Do something with response error
return Promise.reject(error);
}
);
}
in ./src/index.js
import { registerInterceptors } from './path/to/interceptors';
registerInterceptors(); // this will register the interceptors.
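Once the interceptor is registered, the call site reads the way the question asks for (a small usage sketch, inside any async function):
const data = await axios.get('/url'); // data is already response.data, not the full response object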
As a best practice, don't use axios directly everywhere; if in the future you want to migrate to a different http provider, you would have to change every place that uses it.
Instead, create a wrapper around axios and use that wrapper in your app.
For example, create a js file called http.js:
import axios from 'axios';

const execute = ({url, method, params, data}) => {
return axios({
url,
method,//GET or POST
data,
params,
});
}
const get = (url, params) => {
return execute({
url, method: 'GET', params
})
}
const post = (url, data) => {
return execute({
url, method: 'POST', data
})
}
export default {
get,
post,
};
and use it like
import http from './http';
....
http.get('url', {a:1, b:2})
So now you can customize requests across the whole app, and even changing the http provider becomes simple.
I am crafting a simple JSON object and uploading it to DigitalOcean using the s3.putObject function. There are no problems getting it to upload, but when I look at it on DigitalOcean, only the key is there in the JSON object, and the value shows {}.
Here is the code creating the JSON, and uploading it:
async function sendErrorData(error){
var errorfile = {
'errorLog' : error
}
console.log(errorfile)
const params = {
Body: JSON.stringify(errorfile),
Bucket: 'MyBucket',
Key: 'errors.json',
ContentType: "application/json"
};
await uploadToDO(params)
.then((data) => console.log(JSON.stringify(data)))
.catch((err) => console.log(JSON.stringify(err)))
console.log(errorfile)
}
function uploadToDO(params) {
return s3.putObject(params).promise()
}
The console logs before and after the upload show the object perfectly fine, but once uploaded it's missing the values like this.
{
"errorLog": ReferenceError: ....
}
Uploaded:
{
"errorLog": {}
}
{
"errorLog": ReferenceError: ....
}
This is invalid JSON by the looks of things. You are asking AWS to upload it as an application/json file and it is not valid JSON, so it fails.
Therefore, stringify the error when you construct the errorfile:
var errorfile = {
'errorLog' : JSON.stringify(error)
}
Note: this will save the error as a string rather than as a JSON object. If you need it as a JSON object, you'd need to construct it yourself.
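For instance, a plain object could be built from the error's own fields (a sketch; JSON.stringify skips Error properties such as message and stack because they are non-enumerable, which is also why the uploaded value showed up as {}):
var errorfile = {
  errorLog: {
    name: error.name,
    message: error.message,
    stack: error.stack,
  },
};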
You are awaiting the function call uploadToDO(params), but the function uploadToDO is not defined as an async function.
it should be:
async function uploadToDO(params) {
return s3.putObject(params).promise()
}
Hope this helps.