I inherited a Google Mini and am trying to work around the limitation of not being able to index SharePoint (2003) natively. I can map a UNC path to a site's Shared Documents folder in Windows or Mac OS X, but I am stuck somewhere with the Mini. Or am I crazy? Is this not a valid workaround?
For "Start Crawling from the Following URLs:" and "sharepoint.mysite.com\DavWWWRoot\sites\siteName\LibraryName\" I have the following:
\\sharepoint.mysite.com\DavWWWRoot\sites\siteName\LibraryName\
Since this needs authentication, I have the same URL in Crawler Access with a tested username/password combo.
The Crawl Diagnostics gives me an "Error: Document not found (404)." for the same URL. (The Mini displays it in the diagnostics as unc://sharepoint.mysite.com/DavWWWRoot/sites/siteName/LibraryName/.)
Under Network Settings I ran the Network Diagnostics for the aforementioned URL and received the following:
Test URL unc://sharepoint.mysite.com/DavWWWRoot/sites/siteName/LibraryName/ returncode 404, should be 200
Test URL unc://sharepoint.mysite.com/DavWWWRoot/sites/siteName/LibraryName/ OK - pingable
Is what I'm trying possible via the Google Mini?
I had posted a link to a product that I wrote, but the link got deleted (you can Google to find it). Mounting via UNC is not the best approach. However, try using smb:// instead of unc:// for your paths.
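For example, using the library path from the question, the start URL would become something like:
smb://sharepoint.mysite.com/DavWWWRoot/sites/siteName/LibraryName/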
I'm new to website development, though I have done a lot of coding of other sorts in the past. I just set up a personal blog for my own amusement on Bluehost using WordPress, and I have it installed locally for development.
I have a training log on another site, which anyone can open read-only to view training stats, using a URL of the form below; this works fine in a browser:
https://www.othersite.com/logs/1234xyz/authenticate?password=thisismypassword
What I have tried to do, unsuccessfully, is have this open in a frame on one of my web pages (I used iframe/object in HTML). It seems impossible to do this, as the authentication string is not passed across, and the screen displayed prompts for manual input of the password. Can I open this automatically in some way?
If I understood correctly, your solution is insecure whether or not it works: this way your visitors can see the password for your (or another) site.
I suggest querying the external content with custom PHP code and displaying (printing) it on your page (see the sketch below the links).
There are several ways to get the content of an external page:
https://www.php.net/manual/en/function.stream-context-create.php
https://www.php.net/manual/en/function.curl-init.php
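For example, here is a minimal sketch using cURL, assuming the log URL and password from your question (in a real WordPress setup you would keep them in an option or config file rather than hard-coding them):
<?php
// Fetch the read-only training log server-side so the password
// never appears in the HTML that visitors receive.
// The URL and password below are the placeholders from the question.
$url = 'https://www.othersite.com/logs/1234xyz/authenticate?password=thisismypassword';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirect after authentication
$html = curl_exec($ch);

if ($html === false) {
    echo 'Could not load the training log: ' . curl_error($ch);
} else {
    echo $html; // print the fetched page inside your own page
}
curl_close($ch);
This way the password stays on the server, and visitors only ever see the markup you choose to print.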
If you need a tutorial for WP plugins, check this out:
https://www.wpbeginner.com/wp-tutorials/how-to-create-a-wordpress-plugin/
I uploaded some files into Google Cloud Storage. Now, I would like to view the data in the Google Developer Console under Storage->Cloud Storage->Storage Browser by clicking on the created subfolder. Although I have the status "Is owner" under Project_Name->Permissions, I get the error message "Failed to load" (see attached picture). A colleague of mine - who also has the permission "Is owner" - has full access via the browser interface.
So, what do I have to additionally change in order to gain access via the web interface?
I've seen issues like this when multiple users are signed in to a single Chrome browser. You can manage multiple users in Chrome to avoid possible conflicts; visit the link provided for guidance on how to do this.
Open an incognito window in Chrome and see if the problem persists. If so, as in my case, you can resolve it by having only one user logged in to Chrome.
I'm working on a DNN website. I have a user account with Admin privileges but don't have access to the Host account. I do have FTP access and have been browsing around the file structure, and I have seen some files referring to search.
The search is not working on the website so I was hoping I could replace the back-end code which runs the search, via FTP.
Which files would need to be replaced to make sure they are not corrupted/buggy?
I realize doing this may not solve the problem, so any other advice on troubleshooting or possible solutions is appreciated.
EDIT (for those asking in what way search does not work):
Here is an image of what happens when I search 'sheep' (the website is all about sheep). I was told by the company that built the original website that the search runs on our pages' 'Keywords'. I've made sure pages contain keywords, but they still do not show up in search.
The solution I ended up using for this problem, because I could find no other solution without Super-User account access, was to implement Google's Custom Search Engine with the multi-page option.
http://www.google.com/cse/
In my case the original search engine worked via a GET request with a query parameter named q. This is the same as Google's CSE multi-page option, so I was able to simply remove the old search results HTML from a module and replace it with the HTML snippet provided by Google.
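As a rough illustration (the action URL here is hypothetical), both the old module and Google's multi-page snippet boil down to a plain GET form with a q field:
<!-- the results page reads the q parameter from the query string -->
<form action="/SearchResults.aspx" method="get">
  <input type="text" name="q" />
  <input type="submit" value="Search" />
</form>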
Trying to copy a website to a new server as the old one is dying. :(
I tried copying over the files and setting it up manually, but some specific user accounts needed to be used, and the guy who set all this up left the company nearly 5 years ago. He is even worse at documentation than I am.
Anyway, at that point the ASP pages were serving but getting errors. OK, fine... I went back and exported the configuration from the old server (lucky that worked at all) and created a new website from that config on the new server. On the new website created from the config file, the ASP pages are giving 404 errors.
The Active Server Pages extension is enabled, and I can actually get ASP pages to serve from another website on the server... so I'm thinking it's something at the website level. No idea what, though.
Any ideas?
Back when I was doing classic ASP development we used Parent Paths. At the top of your ASP file you'll see something like:
<!--#include file="../../resource/includes/MSSQLconnection.asp"-->
This isn't enabled by default in IIS. It may not be the answer, but it's worth looking at. It was a long time ago now, though.
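If I remember right, on IIS6 you enable it under the site's Home Directory > Configuration > Options, and on IIS 7.x you can flip it from an elevated command prompt with something like:
%windir%\system32\inetsrv\appcmd set config /section:asp /enableParentPaths:true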
Hope this helps,
Mike
404 is a file not found error.
Start by checking you can access a 'hello world' HTML file in the folder using http://localhost/path/toyour/HelloWorldFile.htm
Hello World
is all you need in the file; you don't need to bother with any HTML markup to test what we're interested in.
This will check that your virtual directories, application settings, etc. are correct before you move on to the Active Server Pages settings.
Once you've got your paths sorted out and you know you are looking for your application in the correct place, move on to a 'hello world' ASP file:
<%="Hello World"%>
is all you need in that file!
You ask about settings in IIS which will stop ASP from working; these come to mind as the most obvious.
Depending on the OS (or more specifically the IIS version) you may also need to activate ASP pages.
These instructions from MSDN cover Windows 2003 (IIS6) and Windows 2008 (IIS 7.x).
If you can get your hello world script working you can move on to debugging your application.
It will be a great help when debugging the application if you can see what's going wrong, so I recommend turning off friendly error messages if you are using Internet Explorer. Also set IIS to pass error messages on to the browser; see:
http://learn.iis.net/page.aspx/564/classic-asp-script-error-messages-no-longer-shown-in-web-browser-by-default/
(That may only be relevant to IIS 7.x; I don't have an IIS6 installation lying around to refresh my memory.)
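On IIS 7.x, the setting the linked article describes can, if I recall correctly, also be flipped from an elevated command prompt with something like:
%windir%\system32\inetsrv\appcmd set config /section:asp /scriptErrorSentToBrowser:true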
Make sure you are browsing your application on the server using http://localhost - this should ensure you see any errors.
Good luck
A cheeseburger to the first person who can help me make sense of this. I have a page in a SharePoint app that uses Telerik's RadUpload to upload files. This has worked for months; last week it stopped working (in Internet Explorer; this detail is important). After talking with a co-worker about the problem, I tried the upload with Firefox; it worked. Not only that, all subsequent uploads from Internet Explorer started working. Flash forward an hour, and the aforementioned co-worker, on another SharePoint site running on different servers, was having problems downloading (using Internet Explorer). Being half serious, half smart-aleck, I said 'try it in Firefox'. Not only did that work, ALL SUBSEQUENT DOWNLOADS IN INTERNET EXPLORER WORKED! And he reproduced this behavior on another machine. My fear is that this is a browser issue. All advice will be greatly appreciated.
IE will try and present credentials to a server it knows to be in its Local Intranet zone when it tries to connect (depending on the setting of "Automatic logon only in Intranet zone").
Firefox will only present credentials when prompted, and will generally ask you by popping up a box (unless you've configured a list of sites for it to always present NTLM credentials to).
I've seen a similar case with SharePoint where you can get IE to work by logging in with Firefox first. I theorized it was due to a permission on a remote resource being granted to "Authenticated Users": by logging in forcefully through Firefox, you cause your user to become authenticated. We eventually set "Automatic logon only in Intranet zone" to "Prompt" and it worked; my theory there was that the site wasn't being detected as being in the Local Intranet zone for some reason. If the domain you're accessing contains dots, try also setting your Local Intranet site policy to match the full domain of the SharePoint server, not just *.example.com - I've read that can help.
Could it be as simple as IE not re-downloading a mis-cached .js file that Firefox did download, making IE work after that?
Pretty gnarly to debug.