How to delete all cookies when a visitor visits my website? - web

I want to ask a question: how do I delete all cookies when a visitor visits my website? I am using WordPress.
I have searched through a lot of questions like mine, but I can't find a satisfactory answer.
Please help me! Sorry for my poor English!

You can either retrieve and manipulate cookies on the server side using PHP, or on the client side using JavaScript.
In PHP, you set cookies using setcookie(). Note that this must be done before any output is sent to the browser, which can be quite a challenge in WordPress. You're pretty much limited to some of the early-running hooks, which you can hook into via a plugin or theme file (functions.php, for example), e.g.
add_action('init', function() {
    // yes, this is a PHP 5.3 closure, deal with it
    if (!isset($_COOKIE['my_cookie'])) {
        setcookie('my_cookie', 'some default value', strtotime('+1 day'));
    }
});
Retrieving cookies in PHP is much easier. Simply get them by name from the $_COOKIE superglobal, e.g.
$cookieValue = $_COOKIE['cookie_name'];
Unsetting a cookie requires setting one with an expiration date in the past, something like
setcookie('cookie_name', '', strtotime('-1 day'));
For JavaScript, I'd recommend having a look at one of the jQuery cookie plugins (seeing as jQuery is already part of WordPress). Try http://plugins.jquery.com/project/Cookie
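Since the question asks about deleting all cookies, here is a plain-JavaScript sketch of the client-side approach, without any plugin. The helper name is mine, and it assumes the cookies were set with path=/ and are not HttpOnly (HttpOnly cookies are invisible to page scripts):

```javascript
// Hypothetical helper: given the raw document.cookie string, build the
// assignment strings that would expire every cookie it lists.
function buildExpiredCookies(cookieString) {
  return cookieString.split(';')
    .map(function (pair) { return pair.split('=')[0].trim(); })
    .filter(function (name) { return name.length > 0; })
    .map(function (name) {
      return name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
    });
}

// In the browser you would then assign each string back:
// buildExpiredCookies(document.cookie).forEach(function (c) { document.cookie = c; });
```

Note that cookies set with a narrower path or an explicit domain would need those attributes repeated to be deleted.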
Also refer to:
http://codex.wordpress.org/WordPress_Cookies
http://codex.wordpress.org/Function_Reference/wp_clear_auth_cookie

Most probably you'll do it in PHP, since this is the WordPress platform; you can use either a WordPress function or a plain PHP function. For clearing the authentication cookies:
<?php wp_clear_auth_cookie(); ?>

Related

How can EditThisCookie edit the value of a local HttpOnly cookie in Chrome?

I need to edit locally stored HttpOnly cookies because my Java program doesn't have browser capability, so I send the session to a browser to view and manage the same functions. But now the remote server has been updated and uses HttpOnly cookies (for security reasons), which cannot be read by document.cookie or any other JavaScript code. So I want to try writing a Chrome extension, but first I need to understand the background process of EditThisCookie, because I can edit a cookie's value via that extension.
So this is where my question starts. I looked at the source code on GitHub and tried tracing the extension's apply-click handlers to find its background process, but it uses jQuery calls (I don't really understand what all the code does or how it runs, so I need your help); it does something there and I want to know what that is. After that I can write my own code. Thanks.
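For what it's worth, the "background process" boils down to the chrome.cookies extension API, which, unlike page scripts, can read and write HttpOnly cookies (given the "cookies" permission plus host permissions in the manifest). Here is a rough sketch, not EditThisCookie's actual code; the API object is passed in as a parameter so the logic can be exercised outside Chrome, and all names are illustrative:

```javascript
// `cookiesApi` stands in for chrome.cookies; a real extension would pass
// chrome.cookies here. get() and set() mirror that API's callback shape.
function updateHttpOnlyCookie(cookiesApi, url, name, value, done) {
  cookiesApi.get({ url: url, name: name }, function (cookie) {
    if (!cookie) { return done(null); }  // nothing to edit
    cookiesApi.set({
      url: url,
      name: name,
      value: value,
      httpOnly: cookie.httpOnly          // preserve the HttpOnly flag
    }, done);
  });
}
```

In an extension you would call it as `updateHttpOnlyCookie(chrome.cookies, 'https://example.com/', 'session', 'new-value', cb)` from the background page.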

Node.js and Php deployment

I have experience with PHP from school. I know that if you want to build a website in PHP you simply need a host and write pages with the .php extension.
You might need a database for CRUD operations, of course.
Now I am trying to shift from PHP to Node. I just can't understand how Node works. It's probably my brain that can't reach the aha moment.
So first of all, let's say I want to create a website with login, sign-up, different pages, etc., and I want to use Node as the server-side language. In PHP you just wrote the code in the actual page, uploaded it to the server, and it did what you asked.
How is this possible in Node.js? Pages are in JavaScript, and JavaScript is editable from the browser because it is client-side. So how and where do I write Node code? I can't understand how it works.
How do I choose a good host for Node? I remember hosts like Aruba that just gave me space to upload my PHP pages. How does it work for Node? Are there particular hosts?
Sorry guys, I know it is maybe a dumb question. But I need to get out of this limbo.
Let's start by splitting up your questions:
So first of all, let's say I want to create a website with login, sign-up, different pages, etc., and I want to use Node as the server-side language. In PHP you just wrote the code in the actual page, uploaded it to the server, and it did what you asked.
First we have to understand WHY PHP works like that. According to TechTerms:
PHP Stands for "Hypertext Preprocessor." PHP is an HTML-embedded Web
scripting language. This means PHP code can be inserted into the HTML
of a Web page. When a PHP page is accessed, the PHP code is read or
"parsed" by the server the page resides on. The output from the PHP
functions on the page are typically returned as HTML code, which can
be read by the browser. Because the PHP code is transformed into HTML
before the page is loaded, users cannot view the PHP code on a page.
This, of course, was (and still is) the original purpose of PHP: embedding server-side code in HTML without returning the source, only the intended output, hence its name, Hypertext Preprocessor.
The most classic example is returning a custom message inside an HTML element:
<html>
  <body>
    <div> Hey <b><?php echo $username ?></b>, how are you today? </div>
  </body>
</html>
The output would look something like this:
Hey RavenMask, how are you today?
This is PHP. There are, of course, several frameworks available that change the way you interact with PHP, like the MVC model that Laravel uses.
But now let's move on to NodeJS.
Node is very different from PHP, but the two still share some concepts, like both being interpreted, and I believe this is where your confusion started: you probably thought that the code below was the equivalent of the PHP above:
<html>
  <body>
    <div> Hey
      <script>
        document.write('<b>RavenMask</b>')
      </script>, how are you today? </div>
  </body>
</html>
And in some ways it is, for the client side.
NodeJS is a server-side runtime environment; in a simpler comparison, NodeJS works much like C, Java, or C++ (Node does not need a compiler like GCC, though; it uses a JIT compiler, but that's for another question).
While PHP is a language designed from the ground up to run behind a web server and be easily embedded in HTML, Node is designed more like a traditional language, so achieving a result like PHP's takes a LOT more steps, just as it would with C, C++, Java, or any other language.
In Node, to achieve what PHP does, you would need:
Creating an HTTP Server.
Creating a templating engine to mimic what PHP does inside HTML.
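To make the second step concrete, here is a toy sketch (my own illustration, not a real module) of what a templating engine does: substitute placeholders in an HTML string, loosely mimicking PHP's inline <?php echo ?>:

```javascript
// Toy template "engine": replaces every {{name}} placeholder in the
// template with the matching value from the data object (or '' if absent).
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? String(data[key]) : '';
  });
}

// render('<div> Hey <b>{{username}}</b> </div>', { username: 'RavenMask' })
// → '<div> Hey <b>RavenMask</b> </div>'
```

Real engines like EJS or Handlebars do essentially this, plus escaping, loops, and conditionals.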
Luckily for us, all those steps have already been implemented in battle-tested modules (and even in the native NodeJS standard library!), so the most barebones example you can get is this:
const http = require('http')
// Here you're creating the HTTP server, like Apache
const server = http.createServer((request, response) => {
  const username = 'RavenMask'
  // Abuse template strings!
  // response.end means: give this back to the client and end the cycle of the request
  response.end(`
    <html>
      <body>
        <div> Hey <b>${username}</b>, how are you today? </div>
      </body>
    </html>
  `)
})
// Here is where you start the HTTP server listening
server.listen(3000, (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }
  console.log('server is listening')
})
Go on, save this file as index.js and in your terminal type:
node index.js
Then go to localhost:3000 in your browser and voilà!
This of course, is NOT production ready, so be careful.
To wrap it up: if you're still interested in Node, the steps you need to take are:
Understand how Node works, including its architecture and, most importantly, the event loop.
Use and abuse frameworks and modules. The example I gave you above would cause a lot of suffering and pain if you tried to expand it, due to the way it's designed, but it's fine for learning. With frameworks and modules you get battle-tested, community-contributed code that can make your life easier, safer, and production-ready; one good example is creating an HTTP server with the Fastify framework.
Don't give up: learning a different language sure is hard, but it can open up a lot of opportunities on the market, and you can learn a LOT from looking at things from a different angle; you often end up recognizing mistakes you make in other languages by seeing how things work elsewhere.

Detecting where user has come from a specific website

I want to know whether it's possible to detect which website a user has come from and serve them different content based on which website they have just come from.
So if they've come from any other website on the internet and landed on my page, they will see my normal html and css page, but if they come from a specific website (this specific website would have also been developed by me so I have control over the code server-side and client-side) then I want them to see something slightly different.
It's a very small difference that I want them to see, and that's why I don't want to consider taking them to a different version of the website or a different page.
I'm also not sure whether this solution would be placed on the page they are coming from or the page they are arriving on.
Hope that's clear. Thanks!
I would add a URL parameter like http://example.com?source=othersite. This way you can easily adjust the parameter and use JavaScript to detect it and slightly alter your landing page.
Otherwise, you can use the HTTP referrer sent by the browser to detect where they came from, but you would need to tell us your back-end technology to get an example of that, as it differs a bit.
In JavaScript, you can do something as simple as
if (window.location.href.indexOf('source=othersite') !== -1) {
    // alter DOM here
}
Or you can use a URL Parameter parser as suggested here: How to get the value from the GET parameters?
What you want is the Referer: HTTP header. It will give the URL of the page the user came from. Bear in mind that the Referer can easily be spoofed, so don't treat it as a guarantee if security is an issue.
Browsers may disable the referer, though. Why not just use a URL parameter?
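If the back end happens to be Node, the Referer check might be sketched like this (the helper name is mine; `URL` is the WHATWG URL class that is global in modern Node):

```javascript
// Returns true when the request's Referer header points at the given host.
// The header is spelled "referer" in HTTP, may be absent, and can be spoofed.
function cameFromSite(headers, expectedHost) {
  var ref = headers['referer'] || '';
  try {
    return new URL(ref).hostname === expectedHost;
  } catch (e) {
    return false; // missing or malformed Referer
  }
}
```

Inside an http.createServer handler you would call it as `cameFromSite(request.headers, 'my-other-site.example')` and branch the response on the result.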

Automatically saving web pages requiring login/HTTPS

I'm trying to automate some data scraping from a website. However, because the user has to go through a login screen, a wget cron job won't work, and because I need to make an HTTPS request, a simple Perl script won't work either. I've tried looking at the "DejaClick" addon for Firefox to simply replay a series of browser events (logging into the website, navigating to where the interesting data is, downloading the page, etc.), but the addon's developers for some reason didn't include saving pages as a feature.
Is there any quick way of accomplishing what I'm trying to do here?
A while back I used mechanize (wwwsearch.sourceforge.net/mechanize) and found it very helpful. It supports urllib2, so from what I've now read it should also work with HTTPS requests. So my comment above could hopefully prove wrong.
You can record your actions with the IRobotSoft web scraper. See the demo here: http://irobotsoft.com/help/
Then use the saveFile(filename, TargetPage) function to save the target page.

Easiest way to scrape Google for URLs via my browser?

I'd like to scrape all the URLs my searches return when searching for stuff via Google. I've tried making a script, but Google did not like it, and adding cookie support and captcha handling was too tedious. I'm looking for something that - when I'm browsing through the Google search pages - will simply take all the URLs on the pages and put them inside a .txt file or store them somehow.
Does any of you know of something that will do that? Perhaps a greasemonkey script or a firefox addon? Would be greatly appreciated. Thanks!
See the JSON/Atom Custom Search API.
I've done something similar for Google Scholar, where there's no API available. My approach was basically to create a proxy web server (a Java web app on Tomcat) that would fetch the page, do something with it, and then show it to the user. This is a 100% functional solution but requires quite some coding. If you are interested I can go into more detail and put up some code.
Google search results are very easy to scrape. Here is an example in PHP.
<?php
# a trivial example of how to scrape google
$html = file_get_contents("http://www.google.com/search?q=pokemon");
$dom = new DOMDocument();
@$dom->loadHTML($html); // @ suppresses warnings from Google's malformed HTML
$x = new DOMXPath($dom);
foreach ($x->query("//div[@id='ires']//h3//a") as $node)
{
    echo $node->getAttribute("href")."\n";
}
?>
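Since the question also mentions a Greasemonkey script, the in-browser half can be sketched like this; the function name is mine, and the 'h3 a' selector is a guess at the result markup, which Google changes often:

```javascript
// Userscript-style sketch: collect the href of every result link on the
// page. Takes the document as a parameter so the logic is easy to test.
function collectResultUrls(doc) {
  return Array.prototype.map.call(doc.querySelectorAll('h3 a'), function (a) {
    return a.href;
  });
}

// In a userscript you might then log the list or copy it to a textarea:
// console.log(collectResultUrls(document).join('\n'));
```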
You may try the IRobotSoft bookmark addon at http://irobotsoft.com/bookmark/index.html
