I am looking for some guidelines on using log4net in a UI application, and would highly appreciate any pointers.
Along with anything else you may want to mention, one thing I specifically want to know is which approach is better: multiple log statements, or just one long one.
For example:
logger.Info("Server Name: " + serverName);
logger.Info("DB Name: " + dbName);
logger.Info("App version: " + appVersion);
etc.
Or:
logger.Info(string.Format("Server Name: {0}; DB Name: {1}; App version: {2}",
serverName, dbName, appVersion));
I am mainly looking at this from a performance point of view (rather than readability).
Thanks.
It really depends on what kind of logging you will have. I personally don't like the multiple-logs approach, because the entries can end up separated depending on your appender and logging usage, but that is a personal preference. Without a unifying token (a thread ID or similar) it can be difficult to follow what is related in the logs.
As for the timing, @stuartd is right: the difference would be negligible. Don't bother with it now, and time your code if you encounter problems later on; I'm pretty sure more interesting areas than the logging will pop up.
One point though: don't format your string directly as an argument to the logging call. log4net has *Format methods (DebugFormat, ErrorFormat, etc.) that perform the formatting only if the message will actually be logged, so they are arguably more efficient.
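For illustration, here is a minimal sketch of the difference; MainForm and BuildExpensiveDiagnosticString are hypothetical names, and the logger field follows the standard log4net pattern:

using log4net;

public class MainForm // hypothetical UI class
{
    // Standard log4net logger declaration for a class.
    private static readonly ILog logger = LogManager.GetLogger(typeof(MainForm));

    void LogStartupInfo(string serverName, string dbName, string appVersion)
    {
        // string.Format always builds the string, even when INFO is disabled:
        logger.Info(string.Format("Server Name: {0}; DB Name: {1}", serverName, dbName));

        // InfoFormat defers the formatting until log4net knows the message will be written:
        logger.InfoFormat("Server Name: {0}; DB Name: {1}; App version: {2}",
            serverName, dbName, appVersion);

        // For arguments that are expensive to compute, guard the call explicitly:
        if (logger.IsInfoEnabled)
        {
            logger.Info(BuildExpensiveDiagnosticString()); // hypothetical helper
        }
    }

    string BuildExpensiveDiagnosticString() => "...";
}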
Some wrappers around log4net even add deferred evaluation of the logging string, to avoid wasteful evaluation of the parameters in case they won't be needed, but that's not vanilla log4net.
From a performance point of view, single log events will definitely have some advantage over multiple calls, but how much depends on where you are logging to. If, for example, you are using an AdoNetAppender, log calls are batched, so there should be almost no difference; but if you are using a FileAppender and writing from multiple threads, single log events will require less file locking, so there may be a measurable difference.
That said, performance should not be an issue unless you are logging too much; in your example, the difference would almost certainly be insignificant. If you are really concerned about performance in your specific usage scenario, profile both approaches.
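For what it's worth, the FileAppender locking behaviour mentioned above is configurable. A minimal sketch of configuring it in code, with the file name and pattern invented for illustration (the same settings are more commonly set in the XML config):

using log4net.Appender;
using log4net.Config;
using log4net.Layout;

// MinimalLock holds the file lock only for the duration of each write,
// at the cost of reopening the file per logging event.
var layout = new PatternLayout("%date [%thread] %-5level - %message%newline");
layout.ActivateOptions();

var appender = new FileAppender
{
    File = "app.log",
    AppendToFile = true,
    LockingModel = new FileAppender.MinimalLock(),
    Layout = layout
};
appender.ActivateOptions();

BasicConfigurator.Configure(appender);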
I'm building a StackOverflow clone using event-sourcing. The MVP is simple:
Users can post a question
Users can answer a question
Users can upvote and downvote answers to non-closed questions
I've modeled the question as the aggregate root. A question can have zero or more answers and an answer can have zero or more upvotes and downvotes.
This leads to a massive performance problem, though. To upvote an answer, the question (being the aggregate root) must be loaded, which requires loading all of its answers. In non-event-sourced DDD I would use lazy loading to solve this problem, but lazy loading in event sourcing is non-trivial (http://docs.geteventstore.com/introduction/event-sourcing-basics/).
Is it correct to model the question as the aggregate root?
Firstly, don't use lazy loading (when using an ORM). It may leave you in an even worse situation than waiting a little bit longer would. If you feel you need it, most of the time it means your model is simply wrong.
You probably want to think about questions like these:
1. How many answers per question do you expect?
2. What happens if someone posts an answer while you are submitting yours? The same goes for upvotes.
3. Is an upvote simply a +1 that you never touch again, or do you need to be able to find all upvotes for a user and, for example, change them to downvotes (i.e. upvotes are identified)?
You probably want to go for separate aggregates, not because of performance problems, but because of concurrency problems (question 2).
Depending on performance and the way your upvotes behave, you may consider modeling an upvote as a value object (question 3).
Go ahead and read Vaughn Vernon's essays on aggregate design: http://dddcommunity.org/library/vernon_2011/
The real performance gain comes from CQRS read/write separation:
http://udidahan.com/2009/12/09/clarified-cqrs/
With a simple read model it should not be a problem. What is the maximum number of answers you would expect for a question? Maybe a few hundred, which is not a big deal with a denormalized data model.
The upvote event would be triggered by a very simple command with a handful of properties.
The event handler would most likely have to load the whole question, but it is very quick to load those records by the aggregate root ID and replay the events. If the number of events per question gets very high (due to answer edits, etc.) you can implement snapshots instead of replaying every single event. That process is done asynchronously, which makes the read model "eventually consistent" with the event store.
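To make the load-and-replay step concrete, here is a minimal sketch; the UpvoteAnswer command, the Question aggregate, and the IEventStore/ISnapshotStore interfaces are all invented for illustration, not the API of any particular event store:

using System;
using System.Collections.Generic;

// A very simple command: just the identifiers the handler needs.
public class UpvoteAnswer
{
    public Guid QuestionId { get; set; }
    public Guid AnswerId { get; set; }
    public Guid UserId { get; set; }
}

// Hypothetical store interfaces; real event stores vary.
public interface IEventStore
{
    IEnumerable<object> GetEventsAfter(Guid aggregateId, int version);
}

public interface ISnapshotStore
{
    Question TryGetLatest(Guid aggregateId); // null when no snapshot exists yet
}

public class Question
{
    public Guid Id { get; }
    public int Version { get; private set; }

    public Question(Guid id) => Id = id;

    // Rebuild state one event at a time; a real aggregate would dispatch on event type.
    public void Apply(object evt) => Version++;
}

public class QuestionRepository
{
    private readonly IEventStore events;
    private readonly ISnapshotStore snapshots;

    public QuestionRepository(IEventStore events, ISnapshotStore snapshots)
    {
        this.events = events;
        this.snapshots = snapshots;
    }

    // Start from the latest snapshot if one exists, then replay only the newer events.
    public Question Load(Guid questionId)
    {
        var question = snapshots.TryGetLatest(questionId) ?? new Question(questionId);
        foreach (var evt in events.GetEventsAfter(questionId, question.Version))
            question.Apply(evt);
        return question;
    }
}

Writing a snapshot every N events keeps the replay cost bounded no matter how long-lived the question is.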
(Not sure if this is the right forum for this question.)
I am very curious about how search on major sites, say YouTube/Quora/StackExchange, works.
I'm NOT looking for an answer like "they use the Lucene search engine". I want to understand exactly how the indexing works there.
Is there a different index for text search than for the autocomplete feature?
Is it done in the background, like map reduce?
How exactly does map reduce help deliver results? (I know that it counts words in each document, but what happens after that when I search for a keyword?)
I also heard that Google stopped using map reduce and is now using Cloud Dataflow - how does that work?
Help Please :-)
I voted to close, because I think your question is too broad - each bullet could form the basis of its own SO question. That stated, I'll take a crack at answering how SolrCloud attempts to solve each of the problems you are asking about:
Is there a different index for text search than for the autocomplete feature?
The short answer is "yes". Solr has several options for implementing an autocomplete feature, and all of them rely on either building a separate index or being supplied a separate dictionary. You can also roll your own in an even more sophisticated fashion, as the blog post "Super flexible AutoComplete with Solr" demonstrates.
Is it done in the background, like map reduce?
Generally speaking, no. SolrCloud is based on the idea of shards with leaders and replicas, a shard being a subset of your overall index, comprising a leader and possibly one or more replicas.
Queries are executed against all shard leaders, with one particular shard assigned to serve as the aggregator of the other shards' responses. Unlike map reduce, where the individual node responses contain all the data the reducing node needs, the aggregating Solr shard may make multiple requests back to the other shards - to figure out sort order, for example.
How exactly does map reduce help deliver results? (I know that it counts words in each document, but what happens after that when I search for a keyword?)
See my response to your previous question. In short, the query is executed against each shard, aggregated by one of those shards, and returned to the requestor. The useful magic that people most often associate with Solr - Lucene, really - is Term Frequency-Inverse Document Frequency (TF-IDF) ranking, usually combined with stemming for text searches. While this is not exactly what happens under the hood, and you can vary what's actually done via configuration, it gives a fairly good idea of what's being done.
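As a toy illustration of the TF-IDF idea (not Solr's actual scoring, which is considerably more involved):

using System;
using System.Linq;

class TfIdfDemo
{
    static void Main()
    {
        // A tiny corpus; each string stands in for one document.
        var docs = new[]
        {
            "the quick brown fox",
            "the lazy dog",
            "the quick dog jumps"
        };

        const string term = "quick";

        // Document frequency: how many documents contain the term at all.
        int df = docs.Count(d => d.Split(' ').Contains(term));
        double idf = Math.Log((double)docs.Length / df);

        for (int i = 0; i < docs.Length; i++)
        {
            var words = docs[i].Split(' ');
            // Term frequency: the share of this document's words that match the term.
            double tf = words.Count(w => w == term) / (double)words.Length;
            Console.WriteLine($"doc {i}: tf-idf({term}) = {tf * idf:F4}");
        }
    }
}

Documents mentioning a rare term score high, while terms that appear everywhere (like "the") contribute nothing, since their IDF is log(1) = 0.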
Other searching - on dates, numbers, or simple textual values - is done in a fashion similar to database indexing. That is a simplification; if you want to understand it more fully, read the JavaDoc on NumericRangeQuery for an in-depth explanation.
I also heard that Google stopped using map reduce and is now using Cloud Dataflow - how does that work?
If I knew the answer to that I would probably be working for Google and not answering StackOverflow questions :). Seriously, whatever they've built is new PhD-level work that, as far as I know, they haven't even released a research paper on - unlike map reduce, whose paper led to Yahoo building Hadoop.
As long as it's SQL-injection proof, would it be alright for me to let non-members add comments to a post, and give the author the ability to delete them?
Before you do it, consider the following questions (and any others specific to your project that spring to mind):
Do you have a good rate-limiting scheme set up so a user can't just fill your hard drive with randomly-generated comments? (See the throttling sketch after this list.)
Do you have a system in place to automatically ban users / IP addresses who seem to be abusive?
Do you have a limit on the number / total kilobytes of comments loaded per page, so someone can't fill a page with comments, making it take forever to load, or DoS you by making lots of requests for that page?
Is it possible to fold comments out of sight on the webpage so users can easily hide spammy comments they'd rather not see?
Is it possible for legitimate users to report spammy comments?
These are all issues that apply to full members too, of course. But they matter even more for anonymous users: since anonymous posting is low-hanging fruit, a botmaster would be more likely to target it. The main thing is simply to consider: "If I were a skilled programmer who hated this website, or wanted to make money from advertising on it, and I had a small botnet, what is the worst thing I could do to this website using anonymous comments, given the resources I have?" That's a tough question, and it depends a great deal on what else you have in place.
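On the rate-limiting point, here is a minimal per-IP throttling sketch; the one-minute window and five-comment threshold are arbitrary values invented for illustration:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Tracks recent comment timestamps per IP address and rejects bursts.
public class CommentThrottle
{
    private readonly ConcurrentDictionary<string, Queue<DateTime>> recent = new();
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1);
    private const int MaxPerWindow = 5; // arbitrary threshold

    public bool Allow(string ipAddress)
    {
        var timestamps = recent.GetOrAdd(ipAddress, _ => new Queue<DateTime>());
        lock (timestamps)
        {
            // Discard timestamps that have aged out of the window.
            while (timestamps.Count > 0 && DateTime.UtcNow - timestamps.Peek() > Window)
                timestamps.Dequeue();

            if (timestamps.Count >= MaxPerWindow)
                return false; // over the limit: reject, and perhaps log or escalate to a ban

            timestamps.Enqueue(DateTime.UtcNow);
            return true;
        }
    }
}

In production you would also want to expire idle IPs and persist bans, but the shape is the same.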
If you do it, here are a few pointers:
HTML-escape the comments when you fetch them from the database, before you display them; otherwise you're open to XSS (see the sketch after this list),
Make sure you never run any eval-like function on the input the user gives you (this includes printf; to do something like that you'd want to stick with printf("%s", userStr), so printf doesn't try to interpret userStr directly; if you care about why that's an issue, google Aleph One's seminal paper on stack smashing),
Never rely on the size of the input to fall within a specific range (even if you check this in JavaScript; in fact, especially if you try to ensure this in JavaScript), and
Never trust anything about the content to be true (make no assumptions about character encoding, for example; remember, a malicious user doesn't need to use a browser, they can craft their requests however they want).
Default to paranoia. If someone posts 20 comments in a minute, ban them from commenting for a while. If they keep doing it, ban their IP. If they're a real person and they care, they'll ask you to undo it. Plus, if they're a real person with a history of posting 20 comments a minute, chances are pretty good those comments would be improved by some time under the banhammer; no one's that witty.
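A minimal sketch of the escaping and SQL-injection points together, assuming System.Data.SqlClient, an open SqlConnection named connection, and a Comments table invented for illustration:

using System.Data.SqlClient;
using System.Net;

// Store the comment with a parameterized query; never concatenate user input into SQL.
using (var cmd = new SqlCommand(
    "INSERT INTO Comments (PostId, Body) VALUES (@postId, @body)", connection))
{
    cmd.Parameters.AddWithValue("@postId", postId);
    cmd.Parameters.AddWithValue("@body", rawCommentText);
    cmd.ExecuteNonQuery();
}

// On display, HTML-encode so any markup in the comment renders as inert text.
string safeHtml = WebUtility.HtmlEncode(rawCommentText);

Note that the raw text is stored and the encoding happens at render time, which keeps the data intact if you later change how comments are displayed.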
Typically this kind of question depends on the type of community, as well as the control you give your authors. Definitely implement safety measures and a verification system (e.g. CAPTCHA), but this is something you'll have to gauge over time more often than not. If users are well-behaved, then it's fine. If they start spamming every post they get their hands on, then it's probably time for a feature like that to go away.
What's a good way to capture user stories when you have features that are common across multiple UI modes?
For example, imagine a commercial flight information system, something someone might use to answer the question "When is flight UA211 expected to land?"
As is often the case, providing schedule information is common underlying functionality, even though you might ask for it via a desktop web browser, a mobile browser (where you want to apply different styling to make it more usable), and maybe even via SMS shortcodes.
Now, that certainly could be a single user story ("As someone meeting a traveller, I want to see flight arrival information so that I can be at the airport on time"). But that seems wrong (and would probably be an epic story, anyway).
You can make it separate user stories ("As a desktop user...", "As a smartphone user...", etc.), which I've done in the past, where the team just knows to estimate the first one to include all of the functionality, and the subsequent ones to cover only the UI implementation.
A third option is to make the underlying functionality a story isolated from the presentation layer, and then have UI stories: "As a flight info system front end, I want to get flight status information so that I can present it to the user", "As a desktop user, I want to see flight arrival... etc". But that seems artificial.
Thoughts?
I think the problem is that you are trying to tie the UI functionality too tightly to the backend.
For example, if you break it into a simple story:
A user may want to know the flight status, given the flight number.
OK, given that you implement that, you can now look at which platforms will be calling it. Part of agile is not to over-develop; but if you have a business need to support both mobile and desktop devices, you should look at implementing this as a REST service, since that is the simplest solution for both to work with.
So the REST service solves the first story above.
Now, you will find that there are other specifics for each platform. For example, is there something on the phone that may already have the information - did the traveller go to a trip site and already enter their info? Then you may want to go there, assuming the traveller is in the user's contacts.
Or, if the user is just going to enter a flight number and that is it, then why not do it as a web page, as that is the simplest approach that supports both concepts. If you have a URL that supports GET and outputs HTML, you can easily display the result anywhere.
So my first story was too simple. You may want to consider whether it is possible to return different types of data - a user may want HTML, PDF, JSON or XML - but for each of these there should be a business need.
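As a sketch of the REST service idea, assuming ASP.NET Core minimal APIs; the route shape and the FlightStatusLookup helper are invented for illustration:

// Program.cs, using top-level statements.
var app = WebApplication.Create(args);

// One endpoint serves every client: desktop browser, mobile browser, or an SMS gateway.
app.MapGet("/flights/{flightNumber}/status", (string flightNumber) =>
{
    var status = FlightStatusLookup.Find(flightNumber); // hypothetical backend lookup
    return status is null ? Results.NotFound() : Results.Ok(status);
});

app.Run();

Each front end then owns only its presentation: the desktop site, the mobile site, and the SMS handler all call the same URL and format the response for their medium.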
Unfortunately it is hard to answer your question, as there are too many unknowns, which is why you are having a difficult time. If you ask the wrong question you end up with an epic, but if you can break it down into a few simple stories it becomes much easier to solve.
I would recommend the second option.
As you suggested, the first option sounds like too much for a single story, and a story should always fit into a single iteration.
With the third option, the big problem is that you aren't delivering business value at the end of the story, which is generally bad practice.
There are other ways you could split this work, though. You could initially develop a very cut-down, barebones version that works across all clients, and then refine each client in subsequent stories.
How flexible should a programmer be when a client requests requirements that are not in the project scope?
General perspective:
You need to earn a living; the client needs a computing solution. The client has the right to make sure that the solution you supply fits his needs. Changes and additions after an agreement has been reached reflect on your ability to analyze the user's requirements into a system design - they suggest you failed to investigate those requirements to sufficient depth and detail. You need to do this meticulously and obtain a written sign-off on your system design from the client.
Legal perspective:
You should pin the scope of the project down, and get the client to sign an agreement of that scope. Once you have that agreement, anything not covered by it constitutes a new project.
Business perspective:
Do you want to continue doing business (with the current as well as future clients)? You need to evaluate the impact that adding the new functionality will have on the current project. If the impact is small, do it, but tell the client - in writing - that you are doing him a favor. If the impact is larger, you must negotiate with the client, outlining the issues, and either adapt your current agreement or make a new one. What you do not want to do is antagonize your client.
Lastly: "The client is always right." - (up to the point where you have to give up and just go away.)
This question cannot be given a blanket answer; it varies from project to project.
Examples:
Client has money to burn, long timeline, no other projects on the go, I am very flexible.
Client is tight with $$, short timeline, other projects on the go, I am hardly flexible at all.
Other factors come into play as well, such as the process that has been chosen for the project. For example, you will be more flexible in an agile process, less flexible in a waterfall approach.
I think the answer to your question comes down to how flexible your client is with time and cost, because you cannot change the scope of a project without affecting those two things.
Scope creep can be a good thing if it allows the project to evolve and has an overall positive effect on the outcome. You really need a formal change process in place to manage scope changes.
If it's a fixed-bid project, then I am open to negotiation and will agree to expanding the scope in one area in exchange for reducing it somewhere else, or for an increase in budget, or in exchange for some other consideration.
If it's for a client that I'm billing hourly, then they can expand the scope all they want, since I'll be charging them for the time I spend on it regardless of whether it's within the original definition of the project or not.
Define in advance a list of functions that the system will perform.
If the client adds a new function, increase the cost and time accordingly.
If the client decides to leave a function out of the scope, decrease the cost and time, provided you have not implemented it yet.