I want to create a redirect in JavaScript for the following functionality:
Once someone has landed on a webpage, a JavaScript snippet will temporarily direct them to a second webpage (they will be on this page for a very short time, just long enough for it to capture the cookie/IDs), and then redirect them again to a different, final webpage.
So for example:
I land on www.bbc.co.uk?ID=1234; I then get directed to another webpage which carries my query string, www.google.com?ID=1234; then immediately I get directed to the final webpage, www.facebook.com.
There may be one or more query strings that need to be carried over from bbc onto google. No query strings on the third and final webpage.
I want to ask if this is possible? It must be in JavaScript. I know that to redirect somebody in JS it's simply:
window.location.href =
So far, my workings are as follows. I can get the query strings with:
var vars = [], hash;
var q = document.URL.split('?')[1];
if (q != undefined) {
    q = q.split('&');
    for (var i = 0; i < q.length; i++) {
        hash = q[i].split('=');
        vars.push(hash[1]);
        vars[hash[0]] = hash[1];
    }
}
Help/advice will be very much appreciated!
You can get the query string of the window with the following call: window.location.search
http://www.w3schools.com/jsref/obj_location.asp
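As a minimal sketch, using the example URLs from the question (the intermediate page is assumed to capture its cookie/IDs before redirecting):

// Script on the first page (www.bbc.co.uk): forward the visitor, carrying the query string
var query = window.location.search; // e.g. "?ID=1234", or "" if there are no parameters
window.location.href = "http://www.google.com/" + query;

// Script on the intermediate page (www.google.com): once the cookie/IDs are captured,
// send the visitor on to the final page with no parameters
window.location.href = "http://www.facebook.com/";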
I am currently working on a Discord bot, which uses node.js, to scrape info from Dododex.com, a database of creatures from the game ARK. I found a very useful library called table-scraper which scrapes tables from web pages and returns the data as an object (or array of objects for multiple tables). In the background, table-scraper uses another library called x-ray, along with the common web request-maker request. Keep in mind that I don't know that much about node or how request works.
What I have working is asking the user for the name of a creature to get the data for. I then check that name against a list of creatures (that I have in a file/JSON object) to see if that name is valid.
Now, what I want to get working is to scrape the relevant data from the dododex page belonging to that creature using table-scraper. Table-scraper is definitely working with dododex, but when I call it to gather the data and put it into an object, it seems to start scraping, but then lets the function return with an empty object. If the program didn't crash right after the return (caused by the attempt to access an empty object), I know from testing that it would finish scraping, even though the function the scraping was called in has already finished.
Here's where the relevant code starts:
const creature = getDinoFromName(args[0].toLowerCase(), arkdata); // Function explained below
if (creature.health.base == 404) // If the creature's name is invalid, getDinoFromName() will return a creature with health.base = 404
{
    console.log("unknown");
    unknownCreature(args, msg); // Tells the user that the creature is unknown
    return;
}
else
{
    // Outputs the data
}
And here's where the data gets gathered:
function getDinoFromName(name, arkdata)
{
    // Loops through arkdata, a JSON object with the possible names of all the creatures
    for (let i = 0; i < arkdata.creatures.length; i++)
    {
        // If the name the user provided matches one of the possible names of a creature
        // (creatures can have multiple possible names, e.g. rex, t-rex, tyrannosaurus rex)
        if (arkdata.creatures[i].possible_names.includes(name))
        {
            var creature;
            console.log("found");
            tableScraper.get("http://www.dododex.com/taming/" + arkdata.creatures[i].possible_names[0]).then(function(tableData)
            {
                console.log("scraped");
                // tableData is an array of 3 tables matching the 3 HTML tables found on the site;
                // tableData[1] is the one with the creature's stats. It looks like this:
                // http://ronsoros.github.io/?31db928bc662538a80bd25dd1207ac080e5ebca7
                var stats = tableData[1];
                // Now the actual creature object, parsed from tableData[1]
                creature =
                {
                    "health":
                    {
                        "base": stats[0]["Base (Lvl 1)"],
                        "perwild": stats[0]["Increase Per Lvl (Wild)"],
                        "pertamed": stats[0]["Increase Per Lvl (Tamed)"]
                    }
                    // There is more data to parse but I only included hp as an example
                };
            });
            return creature;
        }
    }
    // If the creature name is not found to be one of the possible names,
    // returns this to signify the creature was not found
    return {"health":{"base":404}};
}
Running the code results in node crashing with the error:
/app/server.js:320
if(creature.health.base == 404)
TypeError: Cannot read property 'health' of undefined
at stats (/app/server.js:320:15)
at Client.client.on.msg (/app/server.js:105:7)
Something wonky is going on here, and I really can't figure it out. Any help is appreciated.
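For what it's worth, the symptoms described here are consistent with tableScraper.get() being asynchronous: getDinoFromName returns before the .then callback runs, so creature is still undefined when the caller reads it. A rough, untested sketch of a promise-based variant (assuming the same table layout as in the code above):

function getDinoFromName(name, arkdata) {
    for (let i = 0; i < arkdata.creatures.length; i++) {
        if (arkdata.creatures[i].possible_names.includes(name)) {
            // Return the promise itself instead of a variable the callback fills in later
            return tableScraper.get("http://www.dododex.com/taming/" + arkdata.creatures[i].possible_names[0])
                .then(function(tableData) {
                    var stats = tableData[1];
                    return {
                        "health": {
                            "base": stats[0]["Base (Lvl 1)"],
                            "perwild": stats[0]["Increase Per Lvl (Wild)"],
                            "pertamed": stats[0]["Increase Per Lvl (Tamed)"]
                        }
                    };
                });
        }
    }
    // Not found: resolve with the same sentinel the original code uses
    return Promise.resolve({"health": {"base": 404}});
}

// The caller then has to wait for the promise as well:
getDinoFromName(args[0].toLowerCase(), arkdata).then(function(creature) {
    if (creature.health.base == 404) {
        unknownCreature(args, msg);
        return;
    }
    // Outputs the data
});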
EDIT: I'm not looking for Facebook APIs! I'm simply using Facebook as an example. I intend to get my browser to perform actions on different websites that likely have no APIs.
Let's say I wish to create a program that will log into Facebook, lookup my friends list, visit each one of their profiles, extract the date + text of each post and write this to a file.
I have an idea how the algorithm should work. But I have absolutely no clue how to interface my code with the browser itself.
Now, I'm a Java programmer, so I would very much imagine the pseudocode in Java would be to create a Browser object, then convert the current page's contents to HTML code so that the data can be parsed. I provided example code below of what I think it ought to look like.
However, is this the right way I should be doing it? If it is, where can I find a web browser object? Are there any parsers I can use to 'read' the content? How do I get it to execute JavaScript, such as clicking a 'Like' button?
Or are there other ways to do it? Is there a GUI approach where I can simply command the program to go to an X/Y pixel position and click on something? Or is there a way to write the code directly inside my Firefox and run it from there?
I really have no clue how to go about doing this. Any help would be greatly appreciated! Thanks!
Browser browser = new Browser();
browser.goToUrl("http://facebook.com");

// Retrieve page in HTML format to parse
HtmlPage facebookCom = browser.toHtml();

// Set username & password
TextField username = facebookCom.getTextField("username");
TextField password = facebookCom.getTextField("password");
username.setText("user123");
password.setText("password123");
facebookCom.updateTextField("username", username);
facebookCom.updateTextField("password", password);

// Update HTML contents
browser.setHtml(facebookCom);

// Click the login button and wait for it to load
browser.getButton("login").click();
while (browser.isNotLoaded()) {
    continue;
}

// Click the friends button and wait for it to load
browser.getButton("friends").click();
while (browser.isNotLoaded()) {
    continue;
}

// Convert the current page (Friends List) into HTML code to parse
HtmlPage facebookFriends = browser.toHtml();

// Retrieve the data for each friend
ArrayList<XMLElement> friendList = facebookFriends.getXmlElementToArray("friend");
for (XMLElement friend : friendList) {
    String id = friend.getId();

    // Visit the friend's page
    browser.goToUrl("http://facebook.com/" + id);
    while (browser.isNotLoaded()) {
        continue;
    }

    // Retrieve the data for each post
    HtmlPage friendProfile = browser.toHtml();
    ArrayList<XMLElement> friendPosts = friendProfile.getXmlElementToArray("post");
    BufferedWriter writer = new BufferedWriter(new FileWriter("C:/Desktop/facebook/" + id));

    // Write the date + text of every post to a text file
    for (XMLElement post : friendPosts) {
        String date = post.get("date");
        String text = post.get("text");
        String content = date + "\n" + text;
        writer.append(content);
    }
    writer.close();
}
I think you are thinking about this the wrong way. You wouldn't really want to write a program to scrape the screen via the browser. It looks like you could take advantage of Facebook's REST API and query for the data you are looking for. A link to get a user's posts via the REST API:
https://developers.facebook.com/docs/graph-api/reference/v2.6/user/feed
You could get their user IDs from this endpoint:
https://developers.facebook.com/docs/graph-api/reference/friend-list/
Then plug the user IDs into the first REST endpoint that was linked. Once you get your data coming back correctly via the REST API, it's fairly trivial to write that data out to a file.
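For illustration only, a rough Node.js sketch of that flow (the two endpoints are the ones linked above; the access token handling, field names, and output path are assumptions, /me/friends only returns friends who also use your app, and paging is ignored):

const https = require('https');
const fs = require('fs');

// Fetch one Graph API resource and parse the JSON response
function graphGet(path, token) {
    return new Promise(function(resolve, reject) {
        https.get('https://graph.facebook.com/v2.6' + path + '?access_token=' + token, function(res) {
            var body = '';
            res.on('data', function(chunk) { body += chunk; });
            res.on('end', function() { resolve(JSON.parse(body)); });
        }).on('error', reject);
    });
}

// Write the date + text of each friend's posts to one file per friend
async function dumpFriendPosts(token) {
    const friends = await graphGet('/me/friends', token);              // friend-list endpoint
    for (const friend of friends.data) {
        const feed = await graphGet('/' + friend.id + '/feed', token); // that friend's posts
        const lines = feed.data.map(function(post) {
            return post.created_time + '\n' + (post.message || '');
        });
        fs.writeFileSync('facebook/' + friend.id + '.txt', lines.join('\n')); // assumes ./facebook exists
    }
}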
I've been trying to figure out how to auto-complete some values from NetSuite onto our custom HTML form.
After a bit of research, I found this gem: nlapiGetContext (http://www.netsuite.com/portal/developers/resources/APIs/Dynamic%20HTML/SuiteScriptAPI/MS_SuiteScriptAPI_WebWorks.1.1.html), which should do exactly what it says.
However, when doing a console.log dump of nlapiGetContext(), the following information is displayed, not my current logged-in user's information.
Here is my current test script:
if (window.addEventListener) { // Mozilla, Netscape, Firefox
    window.addEventListener('load', WindowLoad, false);
} else if (window.attachEvent) { // IE
    window.attachEvent('onload', WindowLoad);
}

function WindowLoad(event) {
    alert(nlapiGetContext().getCompany());
    console.log(nlapiGetContext());
}
Any help or guidance is appreciated!
Thank you!
Where is this form located? Context will only work if you are logged into the system, so this won't apply for online customer forms, those are considered to be "outside the system".
You can write a Suitelet to retrieve data from an external form if you are only retrieving values.
I use this to get campaign information on an external landing page.
function getCamData(request, response) {
    if (request.getMethod() == 'GET') {
        response.setHeader('Custom-Header-CamID', 'CamID');
        var camid = request.getParameter('camid');
        var rec = nlapiLoadRecord('campaign', camid);
        var o = new Object();
        o.thisid = camid;
        o.promocode = rec.getFieldValue('campaignid');
        o.phone = rec.getFieldValue('custevent_cam_1300num');
        o.family = rec.getFieldValue('family');
        var myString = JSON.stringify(o);
        response.write(myString);
    }
}
You request something like this:
https://forms.netsuite.com/app/site/hosting/scriptlet.nl?script=188&deploy=1&compid=xxxxxx&h=fb8224b74b24907a79e6&camid=8020
And it returns something like this:
{"thisid":"8020","promocode":"CAM999","phone":"1800 111 222","family":"12"}
You can also do server-side posting from an external site to a NetSuite customer online form; it will capture and validate the data as long as the entry fields are set up in NS. This is a great way to avoid using those horrible iframes.
Use these functions
nlapiGetContext().getName()
nlapiGetContext().getUser()
nlapiGetContext().getRole()
nlapiGetContext().getRoleId()
nlapiGetContext().getRoleCenter()
nlapiGetContext().getEmail()
nlapiGetContext().getContact()
nlapiGetContext().getCompany()
nlapiGetUser()
nlapiGetDepartment()
For details check http://suitecoder.appspot.com/static/api.html
How do you deal with the fact that URLs are case-sensitive in XPages, even for parameters? For example, the URL:
my_page.xsp?folderid=785478 ... is not the same as ...
my_page.xsp?FOLDERID=785478
How do I make a proper check that params contain some key, e.g.
param.containsKey("folderid"), which doesn't work when there is 'FOLDERID' in the URL?
I'd suggest defining a couple of convenience #Functions:
var #HasParam = function(parameter) {
    var result:boolean = false;
    for (var eachParam : param.keySet()) {
        if (eachParam.toLowerCase() == parameter.toLowerCase()) {
            result = true;
            break;
        }
    }
    return result;
};

var #GetParam = function(parameter) {
    var result = "";
    if (#HasParam(parameter)) {
        for (var eachParam : param.keySet()) {
            if (eachParam.toLowerCase() == parameter.toLowerCase()) {
                result = param.get(eachParam);
                break;
            }
        }
    }
    return result;
};
Then you can safely query the parameters without caring about case. For bonus points, you could add requestScope caching so that you can skip looping through the keySet if you're examining a parameter that you've previously looked at during the same request.
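For the bonus-point caching, a rough sketch building on #GetParam above (the cache key prefix is arbitrary, and this is untested):

var #GetParamCached = function(parameter) {
    var cacheKey = "url_param_" + parameter.toLowerCase();
    if (requestScope.containsKey(cacheKey)) {
        return requestScope.get(cacheKey); // already looked up during this request
    }
    var value = #GetParam(parameter);      // falls back to the keySet loop once
    requestScope.put(cacheKey, value);
    return value;
};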
You may use this function:
context.getUrlParameter('param_name')
Then test whether it's null or not.
Make sure to decide on one, so either upper- or lowercase.
Other than that, I'd suggest something like:
KeyValuePair<string, string> kvp = null;
foreach (KeyValuePair<string, string> p in param)
{
    if (p.Key.ToUpper() == "folderid".ToUpper())
    {
        kvp = p;
        break;
    }
}
The syntax is C# rather than SSJS, but you get the point.
The easiest answer is of course the obvious one: be sure that the parameters you are using throughout your application are always the same on every URL you are generating, and know what to expect. A good approach to accomplish this is to create an SSJS function which generates URLs for you according to the objects you submit.
In this function you could check which object you are receiving and, with the use of keywords and so forth, generate the correct URL. This way, generating a URL twice with the same input parameters should always produce the exact same URL, as in the sketch below.
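For instance, a rough SSJS sketch of such a URL-building helper (all names are made up):

var buildUrl = function(page, params) {
    var url = page + ".xsp";
    var sep = "?";
    for (var key in params) {
        // Always emit parameter names in lowercase so every generated URL agrees
        url += sep + key.toLowerCase() + "=" + encodeURIComponent(params[key]);
        sep = "&";
    }
    return url;
};
// buildUrl("my_page", {FolderID: 785478}) -> "my_page.xsp?folderid=785478"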
Another option would be just to double-check with a bit of code like this:
var key = "yourkey";
if(param.contains(#uppercase(key)) || param.contains(#lowercase(key)){
// do stuff
}
But this should not be necessary if the URL you are parsing is generated by your own application.
Edit after the topic starter's post:
Another option would be to grab the URL directly from the FacesContext and convert it to a string first. When it is a string, you can parse the parameters yourself.
You can combine server-side substitution/redirection to get around the issue that David mentioned. So a substitution rule will redirect an incoming pattern like this:
http://myhost/mypage/param (/mypage/* => which converts to - /dbpath/mypage.xsp?*) - substitution is tricky so please handle with care.
Also, I believe I read somewhere that context.getUrlParameter is not case-sensitive - can someone please confirm this?
Hope this helps.
I'm trying to create a show function which needs to access two documents: the document in the 'doc' reference and another document called 'users'.
My function looks like:
function(doc, req) {
    var friends = doc.friends;
    var listFriends = [];
    for (int i = 0; i < friends.length; i++) {
        var phone = friends[i].phone;
        if (users[phone] != "") {
            listFriends.push(users[phone]);
        }
    }
    return JSON.stringify(listFriends);
}
I'm not an expert in either JavaScript or CouchDB. My question is: is it possible to access the second document (users) in a similar way, like in the code? So far it returns a compilation error.
Thanks
You can only access one document in a CouchDB show function. You could look at using a list function, which works on view results instead of documents.
Create a view where the two documents collate together (appear side-by-side in the view order) and you achieve an effect pretty close to what you wanted to achieve with the show function.
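For example, a rough sketch of such a map function (it assumes a type field on user documents and the friends structure implied by the show function above):

function(doc) {
    if (doc.type === "user") {
        emit([doc.phone, 0], null);                // the user document sorts first under its phone number
    } else if (doc.friends) {
        for (var i = 0; i < doc.friends.length; i++) {
            emit([doc.friends[i].phone, 1], null); // documents referencing that phone sort right after it
        }
    }
}

A list function over this view can then walk the rows and build the combined output that a show function cannot.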