This might be a really dumb question, but is there a way to fetch without entering the server address? I'm wondering if I can just use "/init" instead of "http://localhost:3000/init"
let response;
try {
  const result = await fetch("http://localhost:3001/init", {
    method: "GET",
    headers: {
      "content-type": "application/json"
    }
  });
  response = await result.json();
} catch (e) {
  console.log(e);
}
Is there a way to fetch without entering the server address?
No.
In node.js, node-fetch requires a fully qualified URL. There is no "default" target domain or path that it could substitute like there is inside a browser web page with the browser version of fetch().
From the node-fetch documentation:
fetch(url[, options])
url should be an absolute url, such as https://example.com/.
A path-relative URL (/file/under/root) or protocol-relative URL
(//can-be-http-or-https.com/) will result in a rejected Promise.
If the problem you're really trying to solve is writing code that works with different hosts (running locally and in a hosting environment), then set a configuration variable containing the hostname and construct your URLs from it.
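A minimal sketch of that approach, assuming the host comes from an environment variable (the name BASE_URL is illustrative, not part of any library):

```javascript
// Read the base URL from configuration, falling back to localhost for development.
const BASE_URL = process.env.BASE_URL || "http://localhost:3001";

// Build a full URL from the configured host and a path.
function apiUrl(path) {
  return `${BASE_URL}${path}`;
}
```

Then `fetch(apiUrl("/init"))` works both locally and, with BASE_URL set, in a hosting environment.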
Related
Suppose I enter a (public) website that makes 3 XHR/fetch calls on 3 different endpoints:
https://api.example.com/path1
https://api.example.com/path2
https://api.example.com/path3
What I want to achieve is intercept the call to https://api.example.com/path2 only, redirect it to a local service (localhost:8000) and let path1 and path3 through to the original domain.
What kind of options do I have here? I have studied a lot of approaches to this issue:
DNS rewriting - not suitable, because I would still have to intercept path1 and path3, redirect them to the original IPs, and mimic the headers as closely as possible. That would mean a specific proxy configuration for every intercepted domain, which is unfeasible
Chrome extensions - found none to deal specifically with single endpoint intercepting
Overwriting both fetch and XmlHttpRequest after page load - still doesn't cover all scenarios, maybe some websites cache the values of fetch and XmlHttpRequest before page load (?)
Combining a Chrome extension with a fetch overwrite will work.
Download a web extension that lets you load JavaScript code before a given page loads, e.g. User JavaScript and CSS.
Add the following script to run before your page loads, based on: Intercepting JavaScript Fetch API requests and responses
const { fetch: originalFetch } = window;

window.fetch = async (...args) => {
  let [resource, config] = args;
  // request interceptor starts
  resource = resource === "https://api.example.com/path2"
    ? "http://localhost:8000/path2"
    : resource;
  // request interceptor ends
  const response = await originalFetch(resource, config);
  // response interceptor here
  return response;
};
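If more endpoints need intercepting later, the rewrite rule can be factored into a small lookup table. A sketch, with the same example mapping as above:

```javascript
// Intercepted endpoints mapped to their local replacements (example values).
const redirects = {
  "https://api.example.com/path2": "http://localhost:8000/path2",
};

// Return the redirected URL if one is configured, otherwise pass the URL through.
function rewriteUrl(url) {
  return redirects[url] ?? url;
}
```

Inside the patched fetch, `resource = rewriteUrl(resource)` then replaces the inline ternary, and path1 and path3 fall through untouched.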
Thank you for reading. I have literally been searching for hours: as said in the title, I didn't find anything, and I even looked at the Node.js docs, but what I'm searching for isn't there. It seems odd; does no one do this?
const https = require("https")
https.request("http://aratools.com/dict-service?query=%7B%22dictionary%22%3A%22AR-EN-WORD-DICTIONARY%22%2C%22word%22%3A%22%D9%86%D8%B9%D9%85%22%2C%22dfilter%22%3Atrue%7D&format=json&_=1643219263345", (res) => {
  console.log(res)
})
the error:
TypeError [ERR_INVALID_PROTOCOL]: Protocol "http:" not supported. Expected "https:"
website used: http://aratools.com/
I'm planning to experiment with something later (I have an idea in mind), but first I have to get this GET request working.
You need to specify https in the URL, not http, if you're using the https module. Otherwise you should use the http module.
Use the http module instead of https. Note also that the request is not actually sent until you call .end().
const http = require("http")
http.request("http://aratools.com/dict-service?query=%7B%22dictionary%22%3A%22AR-EN-WORD-DICTIONARY%22%2C%22word%22%3A%22%D9%86%D8%B9%D9%85%22%2C%22dfilter%22%3Atrue%7D&format=json&_=1643219263345", (res) => {
  console.log(res)
}).end()
I am trying to fetch a site: link here. If you click on the link, it shows JSON: {"error":"Socket Error"}. I am trying to fetch that website, and return the error.
However, I get a 403 Forbidden error instead. Is there a reason for this? I turned CORS off, but I don't think it did anything. Here is an example of what I have tried:
async function b() {
  const error = await fetch('https://matchmaker.krunker.io/seek-game?hostname=krunker.io&region=us-ca-sv&game=SV%3A4jve9&autoChangeGame=false&validationToken=QR6beUGVKUKkzwIsKhbKXyaJaZtKmPN8Rwgykea5l5FkES04b6h1RHuBkaUMFnu%2B&dataQuery=%7B%7D', { mode: 'no-cors' }).then(res => res.json())
  console.log(JSON.stringify(error))
}
b()
Why doesn't anything seem to work?
Please comment if there is anything I need to add, this is my first Stack Overflow post so I am still slightly confused by what makes a good question. Thanks for helping!!
NOTE: My environment is Node.JS (testing on Repl.it which I think uses the latest Node version).
This particular host is protected with Cloudflare's anti-DDoS protection. The server doesn't accept requests made by fetch, but it does accept requests from curl. God knows why.
$ curl 'https://matchmaker.krunker.io/seek-game?hostname=krunker.io&region=us-ca-sv&game=SV%3A4jve9&autoChangeGame=false&validationToken=QR6beUGVKUKkzwIsKhbKXyaJaZtKmPN8Rwgykea5l5FkES04b6h1RHuBkaUMFnu%2B&dataQuery=%7B%7D'
// => {"error":"Socket Error"}
You can use curl in node.js with node-libcurl package.
const { curly } = require('node-libcurl')

const url = 'https://matchmaker.krunker.io/seek-game?hostname=krunker.io&region=us-ca-sv&game=SV%3A4jve9&autoChangeGame=false&validationToken=QR6beUGVKUKkzwIsKhbKXyaJaZtKmPN8Rwgykea5l5FkES04b6h1RHuBkaUMFnu%2B&dataQuery=%7B%7D'

curly.get(url)
  .then(({ statusCode, data }) => console.log(statusCode, data))
// => 400 { error: 'Socket Error' }
Works as expected :-)
You can use a proxy such as allorigins.win, a CORS proxy that retrieves the data from a URL as JSON. For example, you can fetch from this URL: https://api.allorigins.win/raw?url=https://matchmaker.krunker.io/game-list?hostname=krunker.io
I want to get the IP that is used in this request to the URL. I am getting the required response (the HTML of the web page). Since I want to dynamically change IPs using AWS Lambda when making requests, I first wanted to see which IP is actually used. Is there a way to get that? I am using the 'request-promise' module for this.
NodeJS Code
const request = require('request-promise')

const options = {
  method: 'POST',
  rejectUnauthorized: false,
  url: URL,
  formData: {
    data: data
  }
}

const response = await request(options)
I solved this issue by making an API request from my first server to a second Node.js server, and printing the caller's IP on the second server.
Also, AWS Lambda uses changing IPs by default.
Is there a way to check what the protocol of an external site is, using Node.js?
For example, for the purposes of URL shortening, people can provide a URL; if they omit http or https, I'd check which it should be and add it.
I know I can just redirect users without the protocol, but I'm just curious if there is a way to check it.
Sure can. First install request-promise and its dependency, request:
npm install request request-promise
Now we can write an async function to take a URL that might be missing its protocol and, if necessary, add it:
const rq = require('request-promise');

async function completeProtocol(url) {
  if (url.match(/^https?:/)) {
    // fine the way it is
    return url;
  }
  // https is preferred
  try {
    await rq(`https://${url}`, { method: 'HEAD' });
    // We got it, that's all we need to know
    return `https://${url}`;
  } catch (e) {
    return `http://${url}`;
  }
}
Bear in mind that making requests like this can take up resources on your server, particularly if someone spams a lot of them. You can mitigate that by passing timeout: 2000 as an option when calling rq.
Also consider only requesting the home page of the site, parsing off the rest of the URL, to mitigate the risk that this will be abused in some way. The protocol should be the same for the entire site.
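Both mitigations can be sketched together. This version uses Node's built-in fetch (available since Node 18) instead of request-promise, probes only the site's origin, and gives up after two seconds; treat it as an illustration, not a drop-in replacement:

```javascript
// Keep only the host part of a URL that may lack a protocol.
function originOf(url) {
  return url.replace(/^https?:\/\//, "").split("/")[0];
}

// Probe the origin over https with a short timeout; fall back to http on failure.
async function detectProtocol(url) {
  try {
    await fetch(`https://${originOf(url)}`, {
      method: "HEAD",
      signal: AbortSignal.timeout(2000), // give up after 2 seconds
    });
    return "https";
  } catch (e) {
    return "http";
  }
}
```

Probing only the origin is safe here because, as noted above, the protocol should be the same for the entire site.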