IIS 7 Logs vs. Custom Logging

I want to log some information about my visitors. Is it better to use the IIS-generated log or to create my own in a SQL Server 2008 database?
I know I should probably provide more information about my specific scenario, but I'd just like the general pros and cons of either approach.

You can add additional information to the IIS logs from ASP.NET using HttpResponse.AppendToLog. Additionally, you could use the Advanced Logging Module to create your own logs with custom filters and custom data, including data from performance counters and more.

It all depends on what information you want to analyse.
If you're doing aggregations and rollups then you'd want to pull this data into a database for analysis. Pulling your data into a database will give you access to indexes and better querying tools.
If you're doing infrequent, one-off simple queries then LogParser might be sufficient for your needs. However, you'll be repeatedly scanning unindexed flat files looking for data, which is I/O-intensive.
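To illustrate the flat-file trade-off, here is a minimal Python sketch of scanning a W3C extended log (the format IIS writes) and aggregating hits per URL. The sample lines and field names are illustrative; real logs declare their layout in a `#Fields:` directive, which this sketch honors.

```python
from collections import Counter

def parse_w3c(lines):
    """Aggregate hits per URL from W3C extended log lines.

    The '#Fields:' directive declares the column layout; data lines are
    space-separated values in that order.
    """
    fields, hits = [], Counter()
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
        elif line and not line.startswith("#"):
            row = dict(zip(fields, line.split()))
            hits[row.get("cs-uri-stem", "?")] += 1
    return hits

sample = [
    "#Fields: date time cs-method cs-uri-stem sc-status",
    "2023-01-01 12:00:00 GET /index.html 200",
    "2023-01-01 12:00:01 GET /index.html 200",
    "2023-01-01 12:00:02 GET /about.html 200",
]
print(parse_w3c(sample))  # Counter({'/index.html': 2, '/about.html': 1})
```

Every such query re-reads the whole file; once you want rollups by day, status code, or client, loading the parsed rows into indexed database tables quickly pays off.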
But as you say, without knowing more about your specific scenario it's hard to say what would be best.

Related

How is Alteryx deploying data in a decentralized way?

From the link https://reviews.financesonline.com/p/alteryx/, I see the following details:
Alteryx is an advanced data analytics platform intended to serve the
needs of business analysts looking for a self-service solution. It
contains 3 basic components: Gallery, Designer, and Server, which
blend data from external sources and generate comprehensive reports.
Each of them, however, can be used separately.
The software structures and evaluates data from multiple external
sources, and organizes it into comprehensive insights that can be used
for business deciding and shared with multiple internal/external
users. Basically, Alteryx is deploying data in a decentralized way,
and eliminating in such way the risk of underestimating it. At the
same time, Alteryx is well-integrated, easy to use, and ran both on
premise and in cloud.
Can anyone help me understand what the text above in bold is trying to explain? I am interested in understanding it in detail, with some explanation.
The basic idea is that the tool can blend just about any kind of data and dump the result to your own local extract. The local extract is "decentralized" in that, obviously, it's local, and also you didn't need to rely on some core ETL team to build a process for you (which they would probably dump in a central location). The use of the term "underestimating" probably indicates that, if you're not building in your own insights (say you find something online that you can blend into your analysis), you're "underestimating" the importance of that data.
It's worth noting that your custom extract could be turned into a nightly job and the output could itself be dumped to a centralized database server if desired. So the tool can be used to build centralized assets too. It really just depends on how you're using it. (With Alteryx this would require either their Desktop Automation, or their Server.)
So... it seems that any self-service data-blending tool would be capable of the same. What's special about Alteryx? The distinguishing factors will lie elsewhere: number of data types supported, overall functionality and power, performance, built-in examples, ease of use, service, support, online community, and perhaps other areas.

SQL Azure Monitoring Tool

I'd like to collect information about the top N queries ranked by execution time, the same as described here: http://msdn.microsoft.com/en-us/library/windowsazure/ff394114.aspx. But I need historical data. Is there any tool for this? Something really simple and cheap. Or is it better to implement this myself, like it is described here: http://wasa.codeplex.com/
If you are open to a paid service to do this monitoring, Cotega may be an option for you. We do monitoring and store historical data on your SQL Azure database. We actually used to log the top N queries, but stopped because it was hard for DBAs to use this information. It would be great to hear more about how you would like to see and use this information, as it would be pretty easy to add this capability back in.
If you want to do this yourself, there are some great services such as Aditi that can be used to schedule processes, allowing you to write some code that executes on a regular basis to log this yourself.
Full disclosure, I work on the Cotega service.
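If you do roll your own, the essence is snapshotting query statistics on a schedule and ranking them. Below is a hedged Python sketch of just the ranking step; the snapshot rows are hand-made stand-ins for what a DMV query (e.g. against sys.dm_exec_query_stats) would return, and the column names are illustrative.

```python
def top_n_queries(snapshot, n=3):
    """Rank query-snapshot rows by total elapsed time, descending.

    Each row is a dict shaped like the columns you'd pull from the
    query-stats DMV (names here are illustrative, not the real schema).
    """
    return sorted(snapshot, key=lambda r: r["total_elapsed_time"], reverse=True)[:n]

snapshot = [
    {"query": "SELECT * FROM Orders",    "total_elapsed_time": 5200},
    {"query": "UPDATE Stock SET ...",    "total_elapsed_time": 900},
    {"query": "SELECT * FROM Customers", "total_elapsed_time": 3100},
]
for row in top_n_queries(snapshot, n=2):
    print(row["query"], row["total_elapsed_time"])
```

Persist each timestamped snapshot to a history table and the "historical data" requirement falls out of running this ranking over any time window.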

Windows Azure App Fabric Cache whole Azure Database Table

I'm working on an integration project where a third party will call our web service in Azure. For performance reasons, I would like to store the data from two tables (more than 1,000 records) in the AppFabric Cache.
Could anyone please suggest if this is the right design pattern?
Depending on how much data this is (you don't mention how wide the tables are), you have a couple of options:
You could certainly store it in the Azure cache; this will cost, though.
You might also want to consider storing the data in the HttpRuntime cache, which is free but not distributed.
Your choice would largely depend on the size of the data, how often it changes, and what happens if someone receives slightly out-of-date data.
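The in-process option amounts to a cache with a time-to-live that bounds how stale the data can get. A minimal Python sketch of the idea (the class and names are hypothetical, standing in for HttpRuntime.Cache with an absolute expiration):

```python
import time

class TtlCache:
    """Tiny in-process cache with per-entry expiry: free and fast,
    but not shared between server instances (the trade-off above)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, loader):
        entry = self._store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]          # fresh hit: no database round-trip
        value = loader()             # miss or stale: reload from the source
        self._store[key] = (value, time.monotonic())
        return value

cache = TtlCache(ttl_seconds=300)
rows = cache.get("lookup_table", loader=lambda: ["row1", "row2"])
```

With multiple web-role instances, each instance holds its own copy, so two callers can briefly see different data; that is exactly the staleness question the answer asks you to weigh against the cost of the distributed cache.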

Azure Table Storage design question: Is it a good idea to use 1 table to store multiple types?

I'm just wondering if anyone who has experience on Azure Table Storage could comment on if it is a good idea to use 1 table to store multiple types?
The reason I want to do this is so I can do transactions. However, I'd also like a sense, in terms of development, of whether this approach would be easy or messy to handle. So far, I'm using Azure Storage Explorer to assist development, and viewing multiple types in one table has been messy.
To give an example, say I'm designing a community site of blogs. If I store all blog posts, categories, and comments in one table, what problems would I encounter? On the other hand, if I don't, then how do I ensure some consistency between category and post, for example (assume one post can have exactly one category)?
Or are there any other different approaches people take to get around this problem using table storage?
Thank you.
If your goal is to have perfect consistency, then using a single table is a good way to go about it. However, I think that you are probably going to be making things more difficult for yourself and get very little reward. The reason I say this is that table storage is extremely reliable. Transactions are great and all if you are dealing with very, very important data, but in most cases, such as a blog, I think you would be better off either 1) allowing for a very small percentage of inconsistent data or 2) handling failures in a more manual way.
The biggest issue you will have with storing multiple types in the same table is serialization. Most of the current table storage SDKs and utilities were designed to handle a single type. That being said, you can certainly handle multiple schemas either manually (i.e. deserializing your object to a master object that contains all possible properties) or by interacting directly with the REST services (i.e. not going through the Azure SDK). If you used the REST services directly, you would have to handle serialization yourself and thus could more efficiently handle the multiple types, but the trade-off is that you are doing everything manually that is normally handled by the Azure SDK.
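One common way to make the manual route manageable is a discriminator property: every entity carries a Kind value, and deserialization dispatches on it. A Python sketch of that idea (the property names and classes are hypothetical, not an Azure SDK API):

```python
from dataclasses import dataclass

@dataclass
class Post:
    partition_key: str
    row_key: str
    title: str

@dataclass
class Comment:
    partition_key: str
    row_key: str
    body: str

def from_entity(entity):
    """Turn a raw table entity (a dict of properties) into a typed object,
    dispatching on the 'Kind' discriminator property."""
    kind = entity["Kind"]
    if kind == "Post":
        return Post(entity["PartitionKey"], entity["RowKey"], entity["Title"])
    if kind == "Comment":
        return Comment(entity["PartitionKey"], entity["RowKey"], entity["Body"])
    raise ValueError(f"unknown entity kind: {kind}")

e = {"PartitionKey": "blog1", "RowKey": "p1", "Kind": "Post", "Title": "Hello"}
print(from_entity(e))  # Post(partition_key='blog1', row_key='p1', title='Hello')
```

Keeping the entity types in one partition is what makes them eligible for a single entity-group transaction, which is the consistency benefit the question is after.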
There really is no right or wrong way to do this. Both situations will work, it is just a matter of what is most practical. I personally tend to put a single schema per table unless there is a very good reason to do otherwise. I think you will find table storage to be reliable enough without the use of transactions.
You may want to check out the Windows Azure Toolkit. We have designed that toolkit to simplify some of the more common Azure tasks.

Combining data from Project Server and SharePoint into a single report

I need to combine data from the Project Server reporting database with data from custom lists in SharePoint workspaces. The results need to be displayed within a single report. How should this be done? Options I've thought of:
Extend the reporting database with the custom list data (if this is possible). Use Reporting Services to display the output.
Query the reporting database and the SharePoint workspaces and combine results in memory. Write custom code to display the output.
Any other ideas? I have the skills to develop this but am very open to purchasing a product if it solves the problem.
I've had this sort of problem as well. My approach:
Create a custom reporting DB.
Run regular jobs from SQL Server to query SharePoint (via web services) and store the results in the DB.
I use the ListItemsChangesSinceToken method in Lists.asmx to improve efficiency. I also use the SiteDataQuery toolset; I wrote a really simple interface over it so that a SiteDataQuery can be called remotely, returning a DataTable.
Use Reporting Services / any tool to extract and report on the data.
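In spirit, the reporting step over the staging DB is just a join of the staged SharePoint list rows onto the Project Server reporting rows. A Python sketch with made-up column names (ProjectUID as the join key is an assumption about your schema):

```python
def combine(project_rows, sharepoint_rows):
    """Join staged SharePoint list items onto Project Server reporting rows
    by project UID. Column names here are illustrative placeholders."""
    by_uid = {r["ProjectUID"]: r for r in sharepoint_rows}
    report = []
    for p in project_rows:
        extra = by_uid.get(p["ProjectUID"], {})
        # Missing list data becomes None rather than dropping the project row.
        report.append({**p, "CustomField": extra.get("CustomField")})
    return report

projects = [{"ProjectUID": "A", "Name": "Rollout"}]
lists = [{"ProjectUID": "A", "CustomField": "Phase 2"}]
print(combine(projects, lists))
# [{'ProjectUID': 'A', 'Name': 'Rollout', 'CustomField': 'Phase 2'}]
```

Once both sides live in the same SQL Server, you would of course express this as a plain SQL join in the Reporting Services dataset instead.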
The reasons I opted for a staging DB were:
Performance - the web service calls are pretty slow.
Service continuity - if SharePoint is down or slow for any reason, queries against it will fail.
Hope this helps.
I also found the tool SharePoint Data Miner which appears to do the same as DJ's answer.
