Fix/Replace DNN search engine via FTP

I'm working on a DNN website. I have a user account with Admin privileges but no access to the Host account. I do have FTP access, have been browsing around the file structure, and have seen some files referring to search.
Search is not working on the website, so I was hoping I could replace the back-end code that runs the search via FTP.
Which files would need to be replaced to make sure they are not corrupted or buggy?
I realize doing this may not solve the problem, so any other advice on troubleshooting or possible solutions is appreciated.
EDIT (for those asking in what way search does not work):
Here is an image of what happens when I search for 'sheep' (the website is all about sheep). I was told by the company that built the original website that the search runs on our pages' keywords. I've made sure the pages contain keywords, but they still do not show up in search.

The solution I ended up using for this problem, since I could find no other fix without Super-User account access, was to implement Google's Custom Search Engine with the multi-page option.
http://www.google.com/cse/
In my case the original search engine worked via a GET request with a parameter named q. This is the same as Google's CSE multi-page option, so I was able to simply remove the old search-results HTML from a module and replace it with the HTML snippet provided by Google.
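For illustration, a minimal sketch of that q-parameter contract (PHP purely because the other examples on this page are PHP; the actual site is DNN/ASP.NET, and the results page shown is hypothetical):
<?php
// Both the old search and Google's CSE multi-page option pass the
// search term in the query string, e.g. /results?q=sheep.
$term = isset($_GET['q']) ? $_GET['q'] : '';

// Echo it back, escaped, just to show the parameter round-trip.
echo 'You searched for: ' . htmlspecialchars($term, ENT_QUOTES, 'UTF-8');
?>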

Related

Opening a read-only URL with a password

I'm new to website development, though I have done a lot of coding of other sorts in the past. I just set up a personal blog for my own amusement on Bluehost using WordPress, and I have it installed locally for development.
I have a training log on another site that anyone can open read-only to view training stats, using a URL of the form below; this works fine in a browser:
https://www.othersite.com/logs/1234xyz/authenticate?password=thisismypassword
What I have tried to do, unsuccessfully, is have this open in a frame on one of my web pages (I used iframe/object in HTML). It seems impossible to do this, as the authentication string is not passed across and the screen displayed prompts for manual input of the password. Can I open this automatically in some way?
If I understood correctly, your solution is insecure regardless of whether it works: this way your visitors can see the password for your (or the other) site.
I suggest querying the external content with custom PHP code and displaying (printing) it on your own page; see the sketch after the links below.
There are several ways to get the content of an external page:
https://www.php.net/manual/en/function.stream-context-create.php
https://www.php.net/manual/en/function.curl-init.php
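A minimal server-side sketch using cURL, assuming the log URL from the question; the password stays on the server and never reaches the visitor's browser:
<?php
// Fetch the read-only log server-side so the password is never exposed.
$url = 'https://www.othersite.com/logs/1234xyz/authenticate?password=thisismypassword';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirect after authentication
$html = curl_exec($ch);
curl_close($ch);

if ($html !== false) {
    echo $html; // embed the fetched markup in your own page
}
?>
The stream-context approach from the first link works similarly, via file_get_contents().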
If you need a tutorial on WP plugins, check this out:
https://www.wpbeginner.com/wp-tutorials/how-to-create-a-wordpress-plugin/

Retrieve Google results without using the Custom Search API

Recently I've been working on an idea that requires me to query Google Images and retrieve links for images matching a search term. My most promising candidate for a usable Google Images API was the Google Web Search API, but it looks like it's going out of service as of tomorrow:
https://developers.google.com/web-search/docs/
The API that replaced it is the Google Custom Search API, but it's a little discouraging to use:
Google API Custom Search with Python - Programmatic Search Results
100 search results a day is a very strict limit; that's just four searches per hour. I also don't want the hassle of creating some custom search bar that I'm never going to use except through Python.
I decided to turn to parsing the HTML of the results page directly. This presents a problem, though, because nowhere inside the page's HTML is there any direct link to the image, only referrer URLs. This is true of both the JavaScript-enabled and JavaScript-disabled versions of Google Images (so even if Python spoofs JavaScript as enabled, nothing changes). I'm not sure where to go from here. Could anyone refer me to some obscure, updated library that I've somehow overlooked, or give me some pointers?
You could use Selenium WebDriver to actually execute the JavaScript and click on the images in the thumbnail view. Once an image has been opened, the link is in the DOM and you can scrape it from there. All WebDriver does is open an actual browser and simulate a user. You can even run it as a headless browser if you use xvfbwrapper. The downside is that, even then, you will need all the dependencies of the browser you are using installed on your server.
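A rough sketch of that approach with the php-webdriver library (kept in PHP to match the other examples on this page, though the question mentions Python); the CSS selectors are assumptions, since Google changes its image-results markup frequently:
<?php
require 'vendor/autoload.php'; // composer package: php-webdriver/webdriver

use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

// Connect to a running Selenium server and load an image search.
$driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::firefox());
$driver->get('https://www.google.com/search?tbm=isch&q=' . urlencode('sheep'));

// Click the first thumbnail so the full-size image element is added to the DOM.
$driver->findElement(WebDriverBy::cssSelector('img'))->click();

// 'img.full-size' is a hypothetical selector for the enlarged image.
$src = $driver->findElement(WebDriverBy::cssSelector('img.full-size'))->getAttribute('src');
echo $src, PHP_EOL;

$driver->quit();
?>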
However, scraping Google is against their terms of service, and they will make an effort to block you as quickly as possible. So, unless you get past the captchas (which are linked to sessions), you will probably not be able to make a whole lot of searches before being blocked this way, either.

Launch Google search from a link

I am running a PHP-based website on a server run by a large host. My goal is very simple: include a link on my site to a Google search where I dynamically supply the search term.
Starting with the URL that appears in the address bar, I've narrowed the syntax down to
http://www.google.com/search?q=test
This works when I type it into the address bar. However, when I launch it from the server, it redirects to:
www.google.com/webhp...lots of characters
There are references on the web to webhp being related to a virus, but I'm pretty sure my host does not have any viruses on its servers.
Does anyone know the proper way to launch a simple Google search from a link? Is a straight link forbidden? I am willing to use JS to push the link to the client if necessary (which I do for Google Maps at Google's recommendation, due to usage limits), but I want to keep things as simple as possible. This link is just to save people a few clicks.
Thanks for any suggestions.
Simply use the urlencode() function:
<?php
// urlencode() escapes the search term so it is safe inside a query string.
echo '<a href="http://www.google.com/search?q=', urlencode($userinput), '">Search Google</a>';
?>
If you wish to do it with JavaScript, the answer is here: Encode URL in JavaScript?
Try to track down the URL rewriting; I think it's a virus you need to remove: http://www.ehow.com/how_8728291_rid-webhp.html
WebHP is a computer virus that automatically sets your homepage to a fake Google site, known as Google.com/WebHP. This virus will also randomly open windows or tabs to load this website, as well as generate pop-ups and fake errors. Also installed with this virus is a rootkit which can disable your PC's firewall and other methods of security. If left untreated, the WebHP virus allows hackers to remotely access your computer and steal personal information, such as credit card numbers and email passwords.

Search for copies of data from all over the internet

I need your help and advice, from a developer's point of view, on how people run sites like copyscape.com. Basically, they search for copies of data across the whole internet. I want to know how they search and catalog every website on the internet, the same way Google builds its index of sites.
Please guide me: how do they search data from all over the internet? How is it possible to keep track of each and every website? How does Google know there is a new site on the internet, and how do its crawlers know a new website has launched? In short, I want to know how I can develop a site that searches for copies of data all over the internet without depending on any third-party API. Please advise me; I hope you will help.
Thanks.
Google's crawlers don't know when a new site is launched. Usually developers must submit their sites to Google or get incoming links from sites that are indexed.
And nobody has a copy of the entire Internet. There are websites that are not linked and never get visited by any crawler. This is called the deep web and is generally inaccessible to crawlers.
How do they do it exactly? I don't know. Maybe they index popular sites where text is likely to be copied, like Blogger, ezinearticles, etc., and if they don't find the text on those sites, they simply say it's original. Just a theory, and I am probably wrong.
Me? I would probably use Google. Just take a good chunk of text from the website you are checking, search for it, and then filter out the results that come from the original website. And voilà, you have the websites that contain that exact phrase, which is presumably copied.
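A minimal sketch of that approach, just building the query URL; the phrase and domain are hypothetical, and Google's "-site:" operator drops hits from the original domain:
<?php
// Quote an exact phrase and exclude the original domain from the results.
$phrase = 'a distinctive sentence copied from the page being checked'; // hypothetical
$originalDomain = 'example.com';                                       // hypothetical

$query = '"' . $phrase . '" -site:' . $originalDomain;
$url   = 'https://www.google.com/search?q=' . urlencode($query);

echo $url, PHP_EOL; // open this in a browser; the remaining hits are candidate copies
?>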

Using Google Docs from web app

We have a requirement for people to be able to look at documents others have uploaded to us (mainly Word, possibly some RTF) via our web app. We want the user to be able to open the docs inside the browser, keeping the original formatting, without the need for another application (like Word, Acrobat, etc.).
We thought about using Google Docs to do this; there appear to be some batch-uploading options to get documents in there. But does anyone know if we can use the APIs to keep the user on our site, without them having to log in to Google Docs themselves and without redirecting them to Google Docs to view the documents?
Cheers
There's an option to make documents public (somewhere in Share -> Advanced Options).
Using the API you can get a list of documents in your Google Docs account; you can even search them. In your app you could make a link to the document in Google Docs that opens in a new window. That way your user never navigates away from your page. An alternative would be to use an iframe, but that's considered bad practice.
A completely different approach could be to automatically generate and host a PDF each time someone uploads a file. There are scripts/programs that can do this; just call them after you receive the file.
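A sketch of that convert-on-upload idea, assuming LibreOffice is installed on the server (the paths below are hypothetical):
<?php
// Convert an uploaded document to PDF with LibreOffice's headless mode.
function convertToPdf($uploadedDoc, $outputDir)
{
    $cmd = sprintf(
        'libreoffice --headless --convert-to pdf --outdir %s %s',
        escapeshellarg($outputDir),
        escapeshellarg($uploadedDoc)
    );
    exec($cmd, $output, $status);
    return $status === 0; // true if the conversion succeeded
}

// e.g. after handling the upload:
// convertToPdf('/uploads/report.doc', '/var/www/public/pdfs');
?>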
