Classes with reserved names (keywords). How do you deal with this? [closed]

It just so happens that my class is a (medical) Service. Do you have any suggestions for what I should name an Angular service that retrieves medical services?
So far I have thought of two things:
MedicalServicesService sounds a bit weird, doesn't it?
I was thinking of maybe replacing "(medical) service" with a synonym, perhaps "medical assistance". Then I would have MedicalAssistanceService as the (programming) service name.
Still, surgery hardly passes for "medical assistance". It really is a medical service.
I was curious what people do when one of their class names just happens to use a programming keyword. A question open for debate.
Not sure if naming-convention questions are allowed here? I will gladly delete my question if they aren't. Thank you.

First off, this depends a lot on the language.
TALK TO YOUR TEAM
Secondly, and most importantly: conventions vary a lot from team to team. Talk to your team, and agree on one convention to use. Never forget this.
Conventions
There are universal conventions/standards to follow, and like you mentioned, using keywords is usually a bad-ish idea. I for one try to avoid them, just as I avoid using digits in my variable names, even if the specific language I'm working with allows it. Reason? It's easier to stick with one safe set of habits across languages ("let's avoid the problems that come from mixing rules between languages") than to have to check each language's rules every time.
I often spend 30 minutes thinking of the perfect variable name, so I am quite used to this kind of pondering.
Length
Excessively long variable names are bad because they hinder reading flow, while excessively short ones are also bad because it is hard to guess what word they stand for. You could call it srvc, sure, but who will know what that means in a month (unless you comment it)? Dropping the vowels in variable names is actually quite common, especially in low-level/older languages.
Specific case
As for this specific example, I wouldn't think of MedicalService as a keyword. First off, it's part of a longer name, just as MedicalFile doesn't look like a file from the filesystem at all, but rather like a form with medical data on it.
I don't know exactly what this MedicalService does, but it seems like a generic (abstract, probably) class name for services that you can ask for at the counter of a hospital, so I'm assuming that.
GenericMedicalThingToDo is a funny way to avoid the keyword, but I wouldn't call it that. MedicalUseCase seems much better, and gets to the point.
On the other hand, if this is just a string stating the use case for whatever the user has chosen (considering you mentioned Angular), I would just stick with userMedicalChoice (dropping PascalCase for camelCase).
If you need to use a word that actually is a keyword, which often happens, you might want to add an underscore (_) at the end or the beginning of it. A leading underscore is usually not good for public interfaces, as it conventionally marks something internal/private. Some conventions use a double underscore for private names, and a single one just to avoid clashes with keywords.
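For example, here is a minimal Python sketch of those underscore conventions (all names here are hypothetical):

# Trailing underscore: sidesteps the reserved word "class" (PEP 8 style).
class_ = "MedicalService"

class Record:
    def __init__(self):
        # Single leading underscore: "internal use" by convention only.
        self._cache = {}
        # Double leading underscore: name-mangled to _Record__token.
        self.__token = "x"

r = Record()
print(r._Record__token)  # the mangled name is still reachable, just obscured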
Last point:
Having keywords as part of a longer variable name is not a problem in any of the many programming languages I have sailed in, so just call it MedicalService, or GenericMedicalService if you're going to subclass it.
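To illustrate, a minimal Python sketch (class names are hypothetical, and the same point holds for an Angular service written in TypeScript):

class GenericMedicalService:
    """Base class for services you can request at the hospital counter."""
    def request(self, patient: str) -> str:
        raise NotImplementedError

class SurgeryService(GenericMedicalService):
    def request(self, patient: str) -> str:
        return f"Surgery scheduled for {patient}"

print(SurgeryService().request("Alice"))  # no conflict with any keyword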
PS: Read up on the conventions of different languages, like PEP 8 and PEP 257 for Python, or Google's C++ style guide. While not valid for all languages, they do give you some pointers to what is important.

Related

What is the difference between Code written in VB.NET and C#? [closed]

Can anybody tell me the difference, considering all the factors like execution time, efficiency, etc.? Which is more effective?
VB.NET is a "friendly" programming language. It supports dynamic programming right out of the box, no need to explicitly type your variables for example. Data conversions are automatic. Overflow checking is on by default. Passing properties by reference just works. You can assign an int to a byte without a cast. You can create a multi-window Winforms app without ever really understanding object-oriented programming. The compiler auto-generates a bunch of code.
None of this comes for free. In some cases, the extra overhead can be very substantial. Simply adding two numbers can be three times more expensive than needed; the overflow checking is pretty dear. Automatic conversions between a string and a number are a frequent wart in a VB.NET program, and very expensive. You don't stand much of a chance of identifying such a bottleneck just by looking at the source code.
C# is much stricter; it (almost) never generates code that hides execution cost under the floor mat. That automatically makes it easier to write performant code, though it doesn't completely remove the need for a profiler to identify a bottleneck.
I'd like to expand upon both answers given so far. They are both correct. The problem with VB.NET is typically the developer's mindset AND the flexibility of the VB.NET language.
If you use Option Explicit On and Option Strict On (Option Strict On implies Option Explicit), and do not use Option Infer, you will get better results at the expense of more complicated code. By complicated I mean you have to correctly cast your variables and objects, something that may be considered complicated by a BASIC developer.
Option Strict Information: http://support.microsoft.com/kb/311329
Option Infer On should not be used 99.99% of the time when writing new applications. I would say 100% of the time, but someone will have a legitimate reason; I just cannot think of any.
Option Infer Information: http://msdn.microsoft.com/en-us/library/bb384665.aspx
There should be none, because they both compile down to the same intermediate language. The biggest variable factor is the programmer -- they may do things in a more roundabout or inefficient way (for example, I can imagine that VB.NET programmers coming from a VB(A) background tend to solve problems differently from C# programmers coming from a C(++) background).
If you want to be sure, take a piece of code and inspect the IL.

What is a hack? [closed]

I use the term all the time... but I was just thinking that I don't really have a solid denotational sense of the term (or at least of the sense I want to discuss here). I'm interested in the sense of the word related to code, not the anthropomorphic idea. I'm also not interested here in the sense related to intentional malicious computing (i.e. a hack to unlock secret powers in a game). What I want to explore is what it means to 'hack' in terms of writing software to solve a problem.
Wikipedia's definition of 'hack' is a bit vague to me, but it's a decent starting point. It considers that a hack:
can refer to a solution or method which functions correctly but which is "ugly" in its conception
works outside the accepted structures and norms of the environment
is not easily extendable or maintainable
can be slang for "copy", "imitation" or "rip-off."
These traits of a hack conform to my usage of the word--when applied to code it is always a term of derision. To my mind, a hack
Is likely to be difficult to maintain & hard to understand in the context of the rest of the code.
Is likely to cause failure of the app.
Tends to indicate a poor understanding by the coder of either the problem space, usage of the language, or both.
Tends to be the byproduct of aggressive schedules.
Suggests potential changes in requirements that have not been fully incorporated into the architecture of the solution (requiring an 'inorganic' workaround).
Smells.
all bad, bad, bad. To me, a hack in this sense is always negative, indicating either lack of time, incompetence, or sloth on the part of the developer, though a decent percentage of hacks must be written to compensate for ill-conceived designs or systems that have gained requirements which their original design cannot handle 'organically'.
I don't think I've really captured it totally though--it's like pornography a bit: I can't really define it, but I know it when I see it. So I ask you: what does it mean to 'hack' when you are trying to solve a problem in software?
I've always preferred Paul Graham's definition:
To add to the confusion, the noun "hack" also has two senses. It can be either a compliment or an insult. It's called a hack when you do something in an ugly way. But when you do something so clever that you somehow beat the system, that's also called a hack. The word is used more often in the former than the latter sense, probably because ugly solutions are more common than brilliant ones.
From the Jargon File, the glossary of hacker slang:
The Meaning of ‘Hack’
“The word hack doesn't really have 69 different meanings”, according to MIT hacker Phil Agre. “In fact, hack has only one meaning, an extremely subtle and profound one which defies articulation. Which connotation is implied by a given use of the word depends in similarly profound ways on the context. Similar remarks apply to a couple of other hacker words, most notably random.”
Hacking might be characterized as ‘an appropriate application of ingenuity’. Whether the result is a quick-and-dirty patchwork job or a carefully crafted work of art, you have to admire the cleverness that went into it.
An important secondary meaning of hack is ‘a creative practical joke’. This kind of hack is easier to explain to non-hackers than the programming kind.
When I think of "hack", I think of it as being a non-expected workaround to solve a problem, not necessarily a bad thing. Creative, innovative, and well-placed. "Hack" can apply to more than just computers, though I seldom hear it used that way.
Too often "hack" simply means: "Not the way I would do it."
This topic will turn into something like a question about love. Everyone's gonna have their own definition. The best way to know the proper (default) definition is to look in the dictionary.
It's when you've stepped out of the idiomatic, natural, sensible and (sometimes) supported ways of doing something in a given language/framework/etc.
Sometimes that's a stroke of genius, usually it's an act of idiocy, occasionally it's one disguised as the other, and on rare occasions it's both.
(Incidentally, the judge who coined that statement about pornography you quote later retracted it in a subsequent ruling.)
When I use the term 'hack' it usually refers to a solution to a problem that was done usually in response to a pressing issue, and so not a lot of thought went into it in regards to the overall design of the application. Sometimes it works out, sometimes not so much, and sometimes it turns out to be a work of genius. But mainly, it's an admitted temporary solution that (hopefully) gets refactored and refined when possible.
Here's a great line I saw about the difference between hacking and scamming: "Hacking attacks are successful when the criminal knows how a particular computer system works. Scams are successful when the perpetrator knows how the human brain works." It brings out the idea that to hack into something, you need a deep understanding of how it works.

How to counter the "one true language" perspective? [closed]

How do you work with someone when they haven't been able to see that there is a range of other languages out there beyond "The One True Path"?
I mean someone who hasn't realised that the modern software professional has a range of tools in his toolbox. The person whose knee-jerk reaction is, for example, "We must do this in C++!" "Everything must be done in C++!"
What's the best approach to open people up to the fact that "not everything is a nail"? How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?
As long as there are valid reasons for it to be done in C++, I don't see anything wrong with this monolithic approach.
Of course a good programmer must have many different tools in his/her toolbox, but these tools don't need to be new languages; it can simply be about learning new programming paradigms.
In my experience, actually, learning many different languages doesn't by itself make you much of a better programmer.
The same goes for finding the right language for the job. Yeah, OK, if you're doing concurrency you might want a functional language rather than an object-oriented one, but what are the actual gains of using another programming language?
At the end of the day; "Maintenance".
If it can be maintained without undue problems then the debate may well be moot and comes down to preference or at least company policy/adopted technology.
If that is satisfied then the debate becomes "Can it be built efficiently to be cost effective and not cause integration problems?"
Beyond that it's simply the screwdriver/build a house argument.
Give them a task that can be done much more easily in some other language/technology, and that is hard to do in the language/technology they suggest for everything.
This way they will eventually search for alternatives as it gets harder and harder for them to accomplish the task using the language/technology they know.
Lead by example, give them projects that play to their strengths, and encourage them to learn.
If they are given a task that is obviously better suited to some other technology and they choose to use a less effective language, don't accept the work. Tell them it's not an appropriate solution to the problem. Think of it as no different than choosing COBOL to replace a shell script -- maybe it works, but it will be hard to maintain over time, take too long to develop, require expensive tools, etc.
You also need to take a hard look at the work they do and decide if it's really a big deal or not if it's done in C++. For example, if you have plenty of staff that knows that language and they finished the task in a decent amount of time, what's the harm? On the other hand, if the language they choose slows them down or will lead to long term maintenance problems they need to be aware of that.
There are plenty of good programmers who only know one language well. That fact in and of itself can't be used to determine whether they are a valuable member of a team. I've known one-language guys who were out of this world, and some that I wouldn't have on a team if they worked for free.
Don't hire them.
Put them in charge of a team of COBOL programmers.
Ask them to produce a binary that outputs an infinite Fibonacci sequence.
Then show them the few lines (or one line, depending on the implementation) it takes in Haskell, and that it too can be compiled into a binary, so there are better ways forward.
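For illustration: in Haskell the classic definition is the one-liner fibs = 0 : 1 : zipWith (+) fibs (tail fibs). A comparable sketch as a Python generator (the function name is mine) is barely longer:

from itertools import islice

def fibs():
    # Lazily yield the infinite Fibonacci sequence.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(*islice(fibs(), 10))  # 0 1 1 2 3 5 8 13 21 34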
How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?
I believe that the opposite of "one true language" is "polyglot programming", and I will then refer to another answer of mine:
Is polyglot programming important?
I actually doubt that anybody nowadays can realize a project in one and only one language (even though there might be exceptions). The easiest way to show them the usefulness of specific tools and languages is to show them that they are already using several, e.g. SQL, build files, various XML dialects, etc.
Though I embrace the polyglot perspective, I also believe that in many areas "less is more". There is a balance to find between the number of languages/tools, the learning curve, and the overall productivity.
The challenge is to decide which small set of languages/tools fit nicely together in your domain and will push productivity and creativity to new limits.
Give them a screwdriver and tell them to build a house?

For what reasons do some programmers vehemently hate languages where whitespace matters (e.g. Python)? [closed]

C++ is my first language, and as such I'm used to whitespace being ignored. However, I've been toying around with Python, and I don't find it too hard to get used to the whitespace rules. It seems, however, that a lot of programmers on the Internet can't get past the whitespace rules. From what I've seen, peoples' C++ programs tend to be formatted very consistently with respect to whitespace (or else it's pretty hard to read), so why do some people have such a problem with whitespace-based languages like Python?
It violates the Principle of Least Astonishment, because we have it ingrained in ourselves (whether for good or bad) that whitespace Does Not Matter in a programming language. Whitespace is one of those issues that has been left up to personal style.
I still have bad memories back from being a student of learning the hard way that 8 spaces is not equivalent to a tab in a Makefile... Ah, the sleep I lost...
The only valid reason I have come across is that refactoring by cut-and-paste (not copy) without refactoring tools (or syntax-aware cut-and-paste) can end up changing semantics if an easy mistake is made.
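A minimal sketch of that failure mode (names are hypothetical): both versions below are valid Python, so pasting the last line at the wrong indent level changes behaviour silently instead of raising an error.

items = [1, 2, 3]

total = 0
for n in items:
    total += n
    print(total)  # inside the loop: prints 1, 3, 6

total = 0
for n in items:
    total += n
print(total)      # pasted one level out: prints only 6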
There are several different types of whitespace (spaces, tabs, weird unicode characters, carriage returns, line breaks, etc.), they aren't necessarily visually distinct, and languages and editors may treat them capriciously. This isn't an argument against well-designed whitespace semantics, but many people are against all forms of it simply because of the possibility of poor design.
People hate it because it violates common sense. Not a single one of the replies I have read here decided that it was OK to simply forgo periods and other punctuation. In fact the grammar has been very good. If the nonsense about indentation actually carrying the meaning were true, we would all just forget about using punctuation entirely.
No one learned that newlines terminate a sentence in a horizontal language like English; instead we learned to infer when a sentence ended, regardless of whether the punctuation was present or not.
The same is true for programming languages, especially for those of us who started out with a programming language that did use explicit block termination. You learn to infer where a block starts and stops over time; it is not the spacing that did that for you, but the semantics of the language itself.
Most literate people would have no problem understanding posts without punctuation. Having to rely on what is a representation of the absence of a character is not a good idea. Do any of you count from zero when you make your to-do list?
Alright, this is a very narrow perspective, but I haven't seen it mentioned elsewhere: keeping track of white space is a hassle if you are trying to autogenerate a script.
When I first encountered Python, I don't remember the details, but I had developed a Windows tool with a GUI that allowed novice users to configure several settings, and then press OK. The output of the tool was a script, which the user could copy to a Unix machine, and then execute it there to do something or other that was too complicated or tedious for them to do manually. Since nobody maintained the generated scripts, there was no reason they needed to look nice. So, keeping track of indentation seemed like an unnecessary burden from that perspective.
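As a rough sketch of the bookkeeping this implies (the helper below is hypothetical, not the original tool's code), every block the generator emits has to be threaded through with the correct depth:

def emit(lines, depth=0):
    # Indent each generated line to the given block depth (4 spaces per level).
    pad = "    " * depth
    return "\n".join(pad + line for line in lines)

script = "\n".join([
    "def main():",
    emit(['print("configuring...")', 'print("done")'], depth=1),
    "",
    "main()",
])
print(script)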
For most purposes, though, I find that Python is much easier than any other language.
Perhaps your C++ background (and thus who your peers are) is clouding your perception of this (i.e. selective sampling), but in my experience the reaction to Python's "whitespace as indentation" approach ranges anywhere from ambivalence to absolute love. The reason a lot of people love it is that it forces people to format their code.
I can't say I've ever met anyone who "hates" it because hating it is much like hating the idea of well-formatted code.
Edit: let me put this in some perspective.
In the Java world there are two main methods of packaging and deploying Web apps: Ant and Maven.
Ant is basically an XML-based Make facility that has tasks for the common things you do. It's a blank slate, which is powerful, but it also means you have to write a lot of common things yourself and every installation is free to do things slightly differently. All of this is well-intentioned but can make it hard to figure out someone's Ant scripts.
Maven is far more fully featured. It has archetypes, which are basically project types. Depending on which archetype(s) you use, you won't have to write any tasks to start, stop, clean, build, etc., but you will have a mandated directory structure, which is quite deep.
The advantage of that is if you've seen one Maven Web app you've seen them all. You know the commands. You know the structure. That's extremely useful.
But you have people who absolutely hate Maven, and I think it comes down to this: they don't like giving up control, even when it's ultimately in their interest to do so. Also, you'll find a certain brand of person who thinks that their use case is a justifiable exception. You see this personality trait a lot. For example, I think an old Joel post mentioned a story where someone wanted to use Enter to go from the username field to the password field, even though the convention was that Enter executed the default action (usually "OK"), so they had to write a custom dialog class for Windows just for this.
Basically some people just don't like being told what to do and others are completely obstinate in their belief that they're right even when all evidence points to the contrary.
This probably explains why some supposedly hate Python's white space: they don't like being told how to format their code. They like the freedom of C/C++.
Because change is scary. And maybe, among certain developers, there are some faint memories of languages with capricious rules about whitespacing that were hard to remember and arbitrary, meant more for compiler convenience than expressiveness.
Most likely, not giving whitespace-significance a fair shake before dismissing it is the real reason. Ask someone to fix a bug in a reasonably complex but well-written Python program, then ask them to go fix a bug in a 20 year old system in C, VB or Cobol and ask them which they prefer.
As for me, I have as much trouble with whitespace in Python or Boo as I have with parentheses in Lisp. Which is to say, none.
They will have to get used to it. Initially I had a problem myself trying to read some examples, but after using the language for some time I started liking it.
I believe it is a habit that people have to overcome.
Some have developed habits (for example: deeply nested loops, unnecessarily large functions) that they perceive would be hard to support in a whitespace sensitive language.
Some have developed an aesthetic dislike for hanging indents.
Because they are used to languages like C and JavaScript where they can align items as they please.
When it comes to Python, you have to indent code based on its context:
def Print():
    ManyArgumentFunction(LongParam1, LongParam2, LongParam3, LongParam4, ...)
In C, you could do:
void Print()
{
    ManyArgumentFunction(LongParam1,
                         LongParam2,
                         LongParam3,
                         ...);
}
The only complaints I (also of C++ background) have heard about Python are from people who don't like using the "Replace Tabs with Space" option in their IDE.

Where are programming tasks in Scrum detailed? [closed]

When you have a sprint task in Scrum, where do you put how you want to program something? For example, say I am making a Tetris game and I want to build the part of the game that tracks the current score and a high-score table. I have my feature, my user story and my task, but now I want to talk about how to design it.
Is that design recorded somewhere on the sprint, as to how to do it, or is that just something the programmer figures out? Do you put "for task x, use database such-and-such, create these columns", etc.? If not, do you record that at all? Is that what Trac is for? I don't mean too-high-level design.
I touched on it here: Where in the scrum process is programming architecture discussed?
but my current question is about later in the project, after the infrastructure. I'm speaking more about the middle now: the actual typing of the code. Some said they decide along the way, some said team leads do. Is this even documented anywhere, except in the code itself with docs and comments?
Edit: does your boss just say, "okay, you do this part, I don't care how"?
Thank you.
There can be architectural requirements, in addition to user-specified requirements, that muddy this a bit. Thus, one could have a "You will use MVP on this" that does limit the design a bit.
In my current project, aside from requirements from outside the team, "the programmer just figures it out" is our standard operating procedure. This can mean crazy things get done and re-worked later on, as not everyone will code something so that the rest of the team can easily use it and change it.
Code, comments and docs cover 99% of where coding details would be found. What's left, if one assumes that wikis are part of docs?
Scrum says absolutely nothing about programming tasks. Up to you to work that out...
Scrum doesn't necessarily have anything explicitly to do with programming - you can use it to organise magazine publication, church administration, museum exhibitions... it's a management technique not explicitly a way of managing software development.
If you do extreme programming inside scrum, you just break your user stories for the iteration down into task cards, pair up and do them.
When I submit tasks to my programming team, the description usually takes the shape of a demo: a description of how the feature will be shown in order to be reviewed.
How the task will be implemented is decided when we evaluate the task. The team members split the task into smaller items. If a design is necessary, the team will have to discuss it before being able to split it. If the design is too complex to be done inside this meeting, we will simply create a design task; agile/scrum doesn't dictate how this should be done (in a wiki, in a doc, in your mind, on a napkin -- your choice) aside from saying: as little documentation as possible. In most cases the design is decided on the spot, after a bit of debate, and the resulting smaller tasks are the description of how things will be done.
Also, sometimes the person doing it will make discoveries along the way that change the design and thus the way to work on it. We may then trash some cards and make new ones. The key is to be flexible.
You do what you need to do. Avoid designing everything up front, but if there are things you already know will not change, then just capture them. However, a corollary to YAGNI is that you shouldn't try to capture too much too soon, as the understanding of what is needed will likely change before someone gets to do it.
I think your question sounds more like you should be asking who, not when or where. The reason Agile projects succeed is that they understand that people are part of the process. Agile projects that fail seem to tend to favor doing things according to someone's idea of "the book" and not understanding the people and project they have. If you have one senior team lead and a bunch of junior developers, then maybe the senior should spend more of their time on such details (emphasis on maybe). If you have a bunch of seniors, then leaving these to the individual may be a better idea. I assume you don't have any cross-team considerations. If you do, then hashing out some of the details like DB schema might need to come early if multiple teams depend on it.
If you (as a team member) feel the need to talk about design, to do some design brainstorming with other team members, then just do it. As for the how, many teams will just use a whiteboard and brain juice for this and keep things lightweight, which is a good practice IMHO.
Personally, I don't see much value in writing down every decision and detail in a formalized document, at least not in early project phases. Written documents are very hard to maintain and become outdated pretty fast, so I tend to prefer face-to-face communication. Actually, written documents should only be created if they're really going to be used, and in the very short term. This can sound obvious, but I've seen several projects very proud of their (obsolete) documentation yet without a single line of code. That's just ridiculous. In other words, write extensive documentation as late as possible, and only if someone values it (e.g. the product owner).
