First time using SSRS web server, setting up a website - SharePoint

It's my first time using this, and as a newbie I have many questions. Any help is appreciated.
My ultimate goal is to have reports created from a database that other end users can access on a website, so they can view and filter the report data online in a shared way, with some user access controls.
So far I have built my reports in Visual Studio, linked to the databases.
I have also set up Reporting Services Configuration Manager so that I can access the SSRS home page and site settings at http://127.0.0.1/Reports/Pages/Folder.aspx
Now my question is, how will the other end users be able to get onto the website and access the reports I created with SSRS? Do I upload the .rdl reports to my report site manually, or do I deploy them from VS? How do I turn my 127.0.0.1/Reports into a public site for other users' access? Or do I have to create it using SharePoint?
Thanks so much, I need some guidance to head in the right direction! :)

Now my question is, how will the other end users be able to get onto the website and access the reports I created with SSRS?
Users will need two things from you to access the site: the server name/address, and a means of authenticating to it. By default, authentication is handled via Windows domain auth (which you can change, with varying degrees of effort...).
Do I upload the .rdl reports to my report site manually, or do I deploy them from VS?
It actually makes no difference in the end; do whichever you find easier. (There are also plenty of other ways to deploy reports, such as through PowerShell!)
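For illustration, here's a minimal PowerShell sketch of a scripted deployment through the ReportService2010 SOAP endpoint (the server URL, folder, and file names are hypothetical; SSRS 2008 R2 and later expose ReportService2010.asmx, older versions use ReportService2005.asmx instead):

    # Publish an .rdl via the SSRS web service (all names here are examples).
    $rs = New-WebServiceProxy -Uri "http://127.0.0.1/ReportServer/ReportService2010.asmx?wsdl" -UseDefaultCredential

    # Read the report definition as raw bytes.
    $bytes = [System.IO.File]::ReadAllBytes("C:\Reports\MyReport.rdl")

    # Create (or overwrite) the report in the /MyReports folder on the server.
    $warnings = $null
    $rs.CreateCatalogItem("Report", "MyReport", "/MyReports", $true, $bytes, $null, [ref]$warnings) | Out-Null

    # Surface any definition warnings (e.g. broken data source references).
    $warnings | ForEach-Object { Write-Warning $_.Message }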
How do I turn my 127.0.0.1/Reports into a public site for other users' access?
Well, you're halfway there. At this point, you could probably open up your firewall (port 80, and maybe 443 depending on your config) and have people connect to your computer via IP or hostname. For example, if your computer's IP were 12.34.56.78, they could visit 12.34.56.78/Reports/ and access the site. If you have a means of creating a URL and pointing it at your SSRS server, you may need to open the configuration manager again and bind that URL to SSRS.
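For example, a hedged sketch of opening those ports from an elevated PowerShell prompt (the rule names are arbitrary; New-NetFirewallRule assumes Windows 8 / Server 2012 or later, older systems would use netsh instead):

    # Allow inbound HTTP to the report server (run elevated; names and ports are examples).
    New-NetFirewallRule -DisplayName "SSRS HTTP" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow

    # If you bound SSRS to HTTPS as well, open 443 too.
    New-NetFirewallRule -DisplayName "SSRS HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow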

Related

Stop people browsing images on website

I'm pretty much new to IIS and this is the problem that I'm trying to solve.
I have a legacy web app (compiled, so not easily changeable) that is running in IIS 7.
This web app displays images of possibly sensitive personal data to the user, but if the user views the source of the page, they are exposed to the URL, which is viewable directly through any browser.
What I think I need to do is remove the folder permissions (where the images live) to stop people directly browsing to them, then create another account that has permission to view this folder, and associate this newly created account with the application pool that is running the site.
So my questions are: would this be the correct way of doing this? And if it is, how would I actually achieve it (bearing in mind that I'm a complete novice in IIS and permissions)?
Any help would be much appreciated.
Thanks,
Craig
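A minimal sketch of the approach described in the question, assuming the images live at C:\inetpub\wwwroot\app\images and using a hypothetical dedicated account DOMAIN\svc_imageapp (the exact ACEs you need to remove will vary with how the folder inherits permissions):

    # Remove the anonymous IIS accounts' access so direct browsing fails (paths and accounts are examples).
    icacls "C:\inetpub\wwwroot\app\images" /remove "IUSR" /remove "IIS_IUSRS"

    # Grant read access to the dedicated account the app pool will run as.
    icacls "C:\inetpub\wwwroot\app\images" /grant "DOMAIN\svc_imageapp:(OI)(CI)R"

    # Run the site's app pool as that account (IIS 7 ships appcmd.exe for this).
    & "$env:windir\System32\inetsrv\appcmd.exe" set apppool "LegacyAppPool" /processModel.identityType:SpecificUser /processModel.userName:DOMAIN\svc_imageapp /processModel.password:S3cret!

With that in place, the legacy app should keep serving the images through its own pages, while anonymous requests straight to the image URLs should fail with a 401.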

VS2012 - How to GET all files on an existing site?

Alright, I am obviously missing something here. I have moved several websites over to Azure to take advantage of all that it has to offer. Traditionally our team has always used Dreamweaver to FTP up/down and such. What I don't understand is how I go about getting hooked up to an EXISTING site on Azure. I can easily set up and web deploy to a NEW site, but I am trying to give the rest of the team access to the sites I have set up, and I am lost as to how to approach this.
I have tried the File > Open Web Site route, and the issue with that is it never then saves the project/info anywhere in VS, and we are required to hook back up to it each time.
All of our local sites are on a shared network drive, so we all access the same local resources. I thought I could simply pass them all the publish profiles and they could then import, get, and then edit and publish files... but it never gives the option to "get all files" from the server.
Hope this makes sense?! Thanks in advance! :)
For multiple developer scenarios, it would be in your best interest to use a source control system such as Git or TFS. This will allow you not only to share the source across team members, but also give you the benefit of tracking changes and merging files that are modified across team members.
If you aren't comfortable with source control, you do still have access to the files via FTP or Secure FTP.
You could also use WebMatrix, which has downloading from the server built directly into the tooling.
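If you go the FTP route, a minimal PowerShell sketch for pulling a file down with the credentials from the site's publish profile (the host name, user name, and paths are hypothetical):

    # Download one file over FTP using publish-profile credentials (all values are examples).
    $client = New-Object System.Net.WebClient
    $client.Credentials = New-Object System.Net.NetworkCredential('mysite\$mysite', 'publishProfilePassword')
    $client.DownloadFile("ftp://waws-prod-xx-001.ftp.azurewebsites.windows.net/site/wwwroot/default.aspx", "C:\local\sites\mysite\default.aspx")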

Excel 2007 Pass-Through Windows Authentication

I've created a simple (asmx) web service which returns a DataSet.
I've added the web service to my Excel 2007 workbook using the Data -> From Web button, and I'm able to view/refresh the data.
The problem comes when I need to secure the web service: I've turned on Windows authentication for the web service, and the request uses SSL.
Unfortunately, the user's logged-on Windows credentials aren't used by Excel when trying to refresh the data; the refresh fails.
If I click Data -> Connections -> Properties -> Definition -> Edit Query, only then am I prompted for my Windows credentials, and only then does the refresh succeed... not a problem for me, but not something I want every user of this spreadsheet to have to do. Any ideas how to make the prompt come up when the refresh is attempted, instead of having it fail?
Thanks!!
Update: Answers so far are to do with SharePoint and Excel Services (neither of which are any use to me)... and one link for which "The following procedure does not apply to data that is retrieved from a text file or a Web query"... I just want a person with a copy of Excel on his desktop machine to be able to update from a password-protected web service... is that so hard, Microsoft??
Another update: Still no answers accepted - because no answers so far have provided a working solution. (Nice googling, though - thanks, guys ;-) )
While I haven't got SSL set up, I can attest that Excel normally shouldn't ask you for authentication when using pass-through authentication.
My guess is that you will need to add the destination website (with the https) to your trusted sites zone in IE. The effect should be that when you go to the website you aren't challenged for your password at all; IE will pass through the authentication credentials because the destination is in the trusted zone.
Once this is fixed Excel should treat it like a normal website.
Here's a link which talks you through adding your site to the trusted zone: http://www.nateirwin.net/2007/01/19/enabling-ntlm-authentication-in-firefox-and-internet-explorer/
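If you'd rather script the zone change than click through IE's options, here's a hedged sketch of the per-user registry tweak (the hostname is hypothetical; note that by default IE only auto-sends credentials for the Local intranet zone, so you may want zone 1 rather than Trusted sites, zone 2):

    # Map https://reports.example.com into the Local intranet zone for the current user.
    $domains = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"
    New-Item -Path "$domains\example.com\reports" -Force | Out-Null

    # The value name is the URL scheme; the data is the zone number (1 = Local intranet, 2 = Trusted sites).
    Set-ItemProperty -Path "$domains\example.com\reports" -Name "https" -Value 1 -Type DWord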
The last time I dealt with this issue was in 2004. If I remember correctly, this is a bug in the Web Query technology, in how the query deals with the SSL certificate. This is Excel 97 technology, and therefore a fairly basic implementation.
After much research and troubleshooting, the only way around this issue is to create user and password parameters and post the web query. Using POST will keep the user/password hidden from prying eyes.
Following is my note from 2004: There is a problem with https, application/vnd.ms-excel, Internet Query (iqy), and Excel 2000/2002.
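To make that concrete, here's the shape of a web query that POSTs prompted parameters, following the .iqy format documented in KB157482 (the URL and parameter names are made up for illustration; the ["name","prompt"] syntax makes Excel prompt the user for each value):

    WEB
    1
    https://server.example.com/secure/report.asp
    user=["user","Enter your user name:"]&pwd=["pwd","Enter your password:"]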
Have you checked out this question: What do I need to do to make Excel access a Web Query via HTTPS?
Excel's Web Queries Enable You to Populate Worksheets from Web Sites at http://msdn.microsoft.com/en-us/library/aa155714(v=office.10).aspx.
Sites requiring authentication and passwords provide additional challenges. They may require coded workarounds or may be unsolvable.
Error message when you use Web query to a secure Web page (https://) in Excel: "Unable to open" at http://support.microsoft.com/kb/290347.
XL97: How to Create Web Query (.iqy) Files at http://support.microsoft.com/kb/157482 is an invaluable resource. (There was a Web Query SDK once that I cannot find, but this article is a good replacement.)
Different Ways of Using Web Queries in Microsoft Office Excel 2003 at .
I don't know if this will help, but I faced a similar situation while importing data from a remote SQL Server Database. What I did was create a role inside the database itself, and assign any users who needed access to that role.
The data is updated into the workbook when the file is loaded using Microsoft Query, so I don't know how that might differ from how you have done things.
The main trick with doing it this way was to open the properties for the query and check the "Use Trusted Connection" box. This worked without an issue for me. Again, this was with a remote server, not a secure website. Hope this helps.
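A hedged sketch of that role setup in SQL 2005-era T-SQL (the server, database, role, and group names are all hypothetical):

    -- Create a login for a Windows group and map it into the reporting database (names are examples).
    CREATE LOGIN [DOMAIN\Report Users] FROM WINDOWS;
    GO
    USE ReportData;
    CREATE USER [DOMAIN\Report Users] FOR LOGIN [DOMAIN\Report Users];

    -- Create the role, give it read access, and put the group in it.
    CREATE ROLE report_readers;
    EXEC sp_addrolemember 'db_datareader', 'report_readers';
    EXEC sp_addrolemember 'report_readers', 'DOMAIN\Report Users';
    GO

With the workbook connection set to "Use Trusted Connection", members of that group should then refresh without being prompted.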
I hope this will help you: Refresh connected imported data
We had a similar situation at work, however, we are using Office 2010. I'm not sure of the limitations of 2007. Check out these links. The last two are specifically for Excel 2007.
Link 1: Configure Secure Store Service for Excel Services
Link 2: Ten Tips for Using SharePoint Server 2007 with Excel Services
Link 3: Plan external data connections for Excel Services

How do I make it possible for SSRS 2008 reports to be viewed by everyone on the web without logging in?

I have SSRS set up and working fine. I can even access the reports from a web browser. The only problem is that it requires me to log in every time I want to view a report. I need anonymous users to be able to view these reports. Is this possible?
Check out this post; it states that anonymous access is no longer supported in SSRS 2008, but can still be enabled. What would be easier, though, is just adding the SSRS site to your local intranet zone in IE; it will then log in automatically.
The easiest solution I've found for my uses is to set up automated subscriptions where SSRS pushes the reports to a file share at regular intervals. I then have a web-app front end with access to the file share where the reports are pushed, and the web app dynamically generates a front end for whatever reports are on that share. This can be fairly easy to design and set up, as long as you don't need the reports to be regenerated on the fly, which gets a bit more involved.
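As a rough illustration, the "dynamic front end" can be as simple as enumerating whatever the subscriptions dropped on the share and writing out links (the share path and output file are hypothetical, and this assumes the web app maps a /reports path to the share):

    # Build a simple index of the reports that subscriptions pushed to the share (paths are examples).
    $share = "\\fileserver\ReportDrop"
    $links = Get-ChildItem -Path $share -Filter *.pdf |
        Sort-Object LastWriteTime -Descending |
        ForEach-Object { "<li><a href='reports/$($_.Name)'>$($_.BaseName)</a> ($($_.LastWriteTime))</li>" }

    # Write the list into a page the web app serves.
    Set-Content -Path "C:\inetpub\wwwroot\reportapp\index.html" -Value "<ul>$links</ul>"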

MOSS 2007 Crawl

I'm trying to get crawl to work on two separate farms I have, but can't get it to work on either one. They both have two WFEs, with an additional WFE configured as an Index server. There is one more server dedicated to Query, and two clustered SQL 2005 back-end servers for the database. I have tried solutions from at least 50 different websites found through a search engine, without success. I have configured (extended) my Web App to use http://servername:12345 as the default zone and http://abc.companyname.com as the custom and intranet zones. When I enter each of those into the content source and then try to run a crawl, I get a couple of errors in the crawl log:
http://servername:12345 returns:
"Could not connect to the server. Please make sure the site is accessible."
http://abc.companyname.com returns:
"Deleted by the gatherer. (The start address or content source that contained this item was deleted and hence this item was deleted.)"
However, I can click both URLs and the page is accessible.
Any ideas?
More info:
I wiped the slate clean, so to speak, and ran another crawl to provide an updated sample.
My content sources are as such:
http://servername:33333
http://sharepoint.portal.fake.com
sps3://servername:33333
My current crawl log errors are:
sps3://servername:33333
Error in PortalCrawl Web Service.
http://servername:33333/mysites
Content for this URL is excluded by the server because of a no-index attribute.
http://servername:33333/mysites
Crawled
sts3://servername:33333/contentdbid={62a647a...
Crawled
sts3://servername:33333
Crawled
http://servername:33333
Crawled
http://sharepoint.portal.fake.com
The Crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly.
I double checked for typos above and I don't see any so this should be an accurate reflection.
One thing to remember is that crawling SharePoint sites is different from crawling file shares or non-SharePoint websites.
A few other quick pointers:
the sps3: protocol is for crawling user profiles for People Search. You can disregard anything the crawler says about it until you're ready for user profiles.
your crawl account is supposed to have access to your entire farm. If you see permissions errors, find the KB article that tells you how to reset your crawl account (it's a specific stsadm.exe command; see the sketch after these pointers). If you're trying to crawl another farm's content, then you'll have to work something else out to grant your crawl account access. I think this is your biggest issue presently.
The crawler (running from the index server) will attempt to visit the public URL. I've had inter-server communication issues before; make sure all three servers can ping each other, and make sure the index server can reach the public URL (open IE on the index server and check it out). If you have problems, it's time to dirty up your index server's hosts file. This is something SharePoint does for you anyway, so don't feel too bad doing it. If you've set up anything aside from Integrated Windows Authentication, you'll have to work harder to get your crawler working.
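For reference, a hedged sketch of both of those fixes (the account, password, and IP are placeholders; the spsearch operation shown resets the WSS search account, while the MOSS default content access account is set in the SSP admin pages):

    # Reset the crawl (content access) account for WSS search (values are placeholders; run on a farm server).
    stsadm.exe -o spsearch -farmcontentaccessaccount DOMAIN\svc_crawl -farmcontentaccesspassword P@ssw0rd

    # Point the index server's hosts file straight at a WFE for the public URL (requires elevation).
    Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.0.0.21    sharepoint.portal.fake.com"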
Anyway, there's been a lot of back and forth in the responses, so I'm just shotgunning a bunch of suggestions out there, maybe one of them is on target.
I'm a little confused about your farm topology. A machine installed as just a WFE cannot be an indexer. A machine installed as "complete" can be an indexer, query server, and/or a WFE...
Also, instead of changing the default content access account, you may want to add a crawl rule (once everything is up and running).
Can you see if anything helpful is in the %commonprogramfiles%/microsoft shared/web server extensions/12/logs on your indexer?
The log file may be a bit verbose; you can search for "started" or "full", and that will usually get you to the line in the log where your crawl started.
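For instance, a quick hedged way to do that search from PowerShell (the path assumes a default install):

    # Search the logs on the indexer for crawl start entries.
    $logs = "$env:CommonProgramFiles\Microsoft Shared\web server extensions\12\LOGS"
    Select-String -Path "$logs\*.log" -Pattern "started", "full" |
        Select-Object -First 20 -Property Filename, LineNumber, Line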
Also, on your SQL machine, you may be able to get more information from the MSScrawlurlhistory table.
Can you create a content source for http://www.cnn.com and start a full crawl? Do you get the same error(s)?
Also, we may want to take this offline, let me know if you want to do that.
I'm not sure if there is a way to send private messages via Stack Overflow, though.
Most of your issues are related to Kerberos, it sounds like. If you don't have the Infrastructure Update applied, then SharePoint will not be able to use Kerberos auth to web sites with non-default (80/443) ports. That's also why (I would bet) you cannot access CA from server 5 when it's on server 4. If you don't have the SPNs set up correctly, then CA will only be accessible from the machine it is installed on. If you had installed SharePoint using port 80 as the default URL, you'd be able to do the local SharePoint crawl without any hitches. But by design, the local SharePoint sites crawl uses the default URL to access the SharePoint sites. Check out http://codefrob.spaces.live.com/blog/cns!7C69E7B2271B08F6!363.entry for a little more detail on how to get Kerberos and SharePoint to work well together.
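If missing SPNs turn out to be the culprit, a hedged example of registering them for the account the web apps run as (the account and host names are placeholders; newer systems can use setspn -S, which also checks for duplicates):

    # Register HTTP SPNs for the web app service account (placeholders throughout).
    setspn -A HTTP/sharepoint.portal.fake.com DOMAIN\svc_webapp
    setspn -A HTTP/servername DOMAIN\svc_webapp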
In the Services on Server section, check the properties for the search crawl account to make sure it is set up and that it has permissions to access those sites.
Thanks for the new input!
So I came back from my weekend and I wanted to go through your pointers, try every one, and then report back about how they didn't work and post the results that I got. Funny thing happened, though.
I went to my Indexer (servername5) and tried to connect to Central Admin and the main portal from Internet Explorer. Neither worked. So I went into IIS on the Indexer to try to browse to the main portal from within IIS. That didn't work either, and I received an error telling me that something else was using that port. So I found my old website from the previous build and deleted it from IIS, along with the corresponding application pool. Then I started the app pool for the website from the new build and browsed to the website. Success. Then I browsed to the website from the browser on my own PC. Success again. Then I ran a crawl by the full URL, not the servername, like so:
http://sharepoint.portal.fake.com
Success again. It crawled the entire portal including the subsites just like I wanted. The "Items in index" populated quickly and I could tell I was rolling.
I still cannot access the Central Admin site hosted on servername4 from servername5. I'm not sure why not but I don't know that it matters much at this point.
Where does this leave me? What was the fix?
I'm still not sure. Maybe it was the rebuild. Maybe as soon as I rebuilt the server farm I had everything I needed to get it to work, but it just wouldn't work because of the previous website still in IIS. (It's funny how sloppy a SharePoint uninstall can be. Manual deletion of content databases, web sites, and application pools seems necessary, and that probably shouldn't be the case.)
In any event, it's working now on my "test" farm so the key is to get it working on the production farm. I'm hopeful that it won't be so difficult after this experience.
Thanks for the help from everyone!
