Why is the usage of util.inherits() discouraged? - node.js

According to the Node.js documentation:
Note: usage of util.inherits() is discouraged. Please use the ES6 class and extends keywords to get language level inheritance support. Also note that the two styles are semantically incompatible.
https://nodejs.org/api/util.html#util_util_inherits_constructor_superconstructor

The reason util.inherits is discouraged is that it changes the prototype of an object, which should be avoided: most JavaScript engines optimise property access on the assumption that an object's prototype will not change, and when it does, performance can suffer.
util.inherits relies on Object.setPrototypeOf to make this change, and the MDN documentation of that native method has this warning:
Warning: Changing the [[Prototype]] of an object is, by the nature of how modern JavaScript engines optimize property accesses, currently a very slow operation in every browser and JavaScript engine. In addition, the effects of altering inheritance are subtle and far-flung, and are not limited to the time spent in the Object.setPrototypeOf(...) statement, but may extend to any code that has access to any object whose [[Prototype]] has been altered.
Because this feature is a part of the language, it is still the burden on engine developers to implement that feature performantly (ideally). Until engine developers address this issue, if you are concerned about performance, you should avoid setting the [[Prototype]] of an object. Instead, create a new object with the desired [[Prototype]] using Object.create().
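To make that concrete, here is a minimal sketch (with hypothetical Animal/Dog constructors) of what util.inherits effectively does versus the Object.create() approach MDN recommends:
function Animal(name) { this.name = name; }
function Dog(name) { Animal.call(this, name); }

// util.inherits(Dog, Animal) effectively does this, mutating the
// [[Prototype]] of an existing object - the slow path MDN warns about:
Object.setPrototypeOf(Dog.prototype, Animal.prototype);

// the alternative: create a fresh object with the desired [[Prototype]]
// up front, so nothing is mutated after creation
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;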

As the quote says, you should use the ES6 class and extends keywords to get language-level inheritance support instead of util.inherits, and that's exactly why its use is discouraged: there now exist better alternatives that are part of the core language, that's all.
util.inherits comes from a time when these utilities were not part of the language and defining your own inheritance tools required a lot of boilerplate.
Nowadays the language offers a valid alternative and it no longer makes sense to use the one provided by the library. Of course, this is true as long as you plan to use ES6 - otherwise ignore that note and continue to use util.inherits.
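For a sense of the old style, here is a rough sketch of the util.inherits pattern, using the same Person/PoliceMan names as the class-based example further down:
const util = require("util");

function Person(fName, lName) {
  this.firstName = fName;
  this.lastName = lName;
}

Person.prototype.greet = function () {
  console.log("in a prototype fn..", this.firstName, "+ ", this.lastName);
};

function PoliceMan(burgler) {
  Person.call(this, "basava", "sk"); // the manual "super" call
  this.burgler = burgler;
}

util.inherits(PoliceMan, Person); // the manual prototype wiring

new PoliceMan().greet();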
To reply to your comment:
How is util.inherits() more complicated?
It's not a matter of being more or less complicated: using a core language feature should always be your preferred way over a library-specific alternative, for obvious reasons.

util.inherits() is discouraged in newer versions of Node, so you need to use the ES6 class and extends keywords to get language-level inheritance support instead of util.inherits.
The example below should help you understand more clearly:
"use strict";
class Person {
  constructor(fName, lName) {
    this.firstName = fName;
    this.lastName = lName;
  }

  greet() {
    console.log("in a class fn..", this.firstName, "+ ", this.lastName);
  }
}

class PoliceMan extends Person {
  constructor(burgler) {
    // initialize the inherited Person fields before adding our own
    super("basava", "sk");
    this.burgler = burgler;
  }
}

let policeObj = new PoliceMan();
policeObj.greet();
Output: in a class fn.. basava + sk
Here the Person class is inherited by the PoliceMan class, so a PoliceMan object can access the properties of Person, which are initialized by the super() call in the constructor.
Hope this works as a replacement for util.inherits().
Happy Coding !!!

Related

How to provide and consume require.js modules in scala.js (and extending classes)

I'm working on this Ensime package for Atom.io https://github.com/ensime/ensime-atom and I've been thinking about the possibility of using scala.js instead of writing CoffeeScript.
Atom is a web-based editor which is scripted with js and is node.js based. A plugin/package defines its main entry point by pointing to a JavaScript object with a few specific methods.
I figured I should start out simple and try using scala.js to replace the simplest CoffeeScript file I have:
{View} = require 'atom-space-pen-views'

# View for the little status messages down there where messages from Ensime server can be shown
module.exports =
class StatusbarView extends View
  @content: ->
    @div class: 'ensime-status inline-block'

  initialize: ->

  serialize: ->

  init: ->
    @attach()

  attach: =>
    statusbar = document.querySelector('status-bar')
    statusbar?.addLeftTile {item: this}

  setText: (text) =>
    @text("Ensime: #{text}").show()

  destroy: ->
    @detach()
As you can see this exports a require.js module and is a class extending a class fetched with require as well.
Sooo.
I'm thinking I'd just use Dynamic for the require dependency, as I've seen in the SO question How to invoke nodejs modules from scala.js?:
import js.Dynamic.{global => g}
import js.DynamicImplicits._

private[views] object SpacePen {
  private val spacePenViews = require("atom-space-pen-views")
  val view = spacePenViews.view
}
But if I wanted to type the super-class, could I just make a facade-trait and do asInstanceOf?
Secondly, I wonder how I can export my class as a node module. I found this:
https://github.com/rockymadden/scala-node/blob/master/main/src/main/coffeescript/example.coffee
Is this the right way? Do I need to do the sandboxing? Couldn't I just get module imported from global and write module.exports = _some_scala_object_?
I'm also wondering how I could extend existing js classes. The same problem as asked here, but I don't really understand the answer:
https://groups.google.com/forum/#!topic/scala-js/l0gSOSiqubs
My code so far:
private[views] object SpacePen {
private val spacePenViews = js.Dynamic.global.require("atom-space-pen-views")
type View = spacePenViews.view
}
class StatusBarView extends SpacePen.View {
override def content =
super.div()
}
gives me compile errors that I can't extend sealed trait Dynamic. Of course.
Any pointers highly appreciated!
I'm not particularly expert in Node per se, but to answer your first question: yes -- if you have a pointer to a JS object, and you know the details of its type, you can pretty much always define a facade trait and use asInstanceOf. That ought to work.
As for the last bit, you basically can't extend JS classes in Scala.js -- it just doesn't work. The way most of us get around that is by defining implicit classes, or using implicit def's, to get the appearance of extending without actually doing so.
For example, given JS class Foo, I can write
implicit class RichFoo(foo: Foo) {
  def method1() = { ... }
}
This is actually a wrapper around Foo, but calling code can simply call foo.method1() without worrying about that detail.
You can see this approach in action very heavily in jquery-facade, particularly in the relationship between JQuery (the pure facade), JQueryTyped (some tweaked methods over JQuery to make them work better in Scala), and JQueryExtensions (some higher-level functions built around JQuery). These are held together using implicit def's in package.scala. As far as calling code is concerned, all of these simply look like methods on JQuery.

IntelliJ IDEA GDSL: add a constructor to the class. Documentation for GDSL

I have an annotation which adds some methods and a default constructor to the annotated class.
I have managed to create a GDSL to enable autocompletion in IDEA for the methods, but I'm stuck with the constructor, and the documentation is very poor.
Does anyone have any ideas how to do this?
Maybe I could find a solution in an existing GDSL, but I can't remember any transformation related to constructors. Maybe you can remind me of one.
def objectContext = context(ctype: "java.lang.Object")

contributor(objectContext) {
  if (hasAnnotation("com.xseagullx.SomeAnnotation")) {
    // Here I want to add the constructor's declaration (with empty args)
    // …
    // And then my methods.
    method name: 'someMethod', type: 'void', params: [:]
  }
}
EDITED: OK, if it's as @jasp says and there is no DSL construct for declaring constructors, I'm still asking for good documentation sources other than JB's Confluence page. Tutorials and other sources. I'm familiar with the embedded DSLs for Groovy, Grails, and Gradle.
I need something more structured, if possible.
All function invocations inside GroovyDSL are just calls to wrappers around IDEA's internal Program Structure Interface (PSI). However, it doesn't cover all of PSI's abilities, including, I believe, the default-constructor functionality. One piece of evidence for that is singletonTransform.gdsl, which has been bundled with IDEA since version 9 and describes the @Singleton AST transformation. Here is its code:
contributor(context()) {
  if (classType?.hasAnnotation("groovy.lang.Singleton")) {
    property name: "instance",
             type: classType?.getQualifiedName() ?: "java.lang.Object",
             isStatic: true
  }
}
As you can see, it doesn't change the constructor or its visibility, so IDEA will autocomplete this invalid code:
@Singleton class Foo {}
def foo = new Foo()
Furthermore, the GDSL that describes the semantics of GroovyDSL (which is actually part of /plugins/groovy/resources/standardDsls/metaDsl.gdsl in the IDEA sources) doesn't provide any way to describe constructors.
In this case I suggest you use the newify transformation, which allows you to describe a targetClass.name method returning the created instance.
I know this is a bit old, but I found myself looking for something similar.
The DSL you are looking for is
method params: [:], constructor: true
although I don't understand why you'd need it: if a class doesn't declare any constructors, doesn't IDEA always suggest the default one?

Ignore certain TypeScript compile errors?

I am wondering if there is a way to ignore certain TypeScript errors upon compilation?
I basically have the same issues most people with large projects have around using the this keyword, and I don't want to put all my class methods into the constructor.
So I have got an example like so:
TypeScript Example
This seems to create perfectly valid JS and allows me to get around the this keyword issue; however, as you can see in the example, the TypeScript compiler tells me that I cannot compile that code, as the keyword this is not valid within that scope. I don't see why it is an error, as it produces okay code.
So is there a way to tell it to ignore certain errors? I am sure given time there will be a nice way to manage the this keyword, but currently I find it pretty dire.
== Edit ==
(Do not read unless you care about context of this question and partial rant)
Just to add some context to all this to show that I'm not just some nut-job (I am sure a lot of you will still think I am) and that I have some good reasons why I want to be able to allow these errors to go through.
Here are some previous questions I have made which highlight some major problems (imo) with TypeScript's current this implementation.
Using lawnchair with Typescript
Issue with child scoping of this in Typescript
https://typescript.codeplex.com/discussions/429350 (And some comments I make down the bottom)
The underlying problem I have is that I need to guarantee that all logic is within a consistent scope; I need to be able to access things within knockout, jQuery, etc., and the local instance of a class. I used to do this with var self = this; within the class declaration in JavaScript, and it worked great. As mentioned in some of these previous questions, I cannot do that now, so the only way I can guarantee the scope is to use lambda methods, and the only way I can define one of these as a method within a class is within the constructor. This part is HEAVILY down to personal preference, but I find it horrific that people seem to think that using that syntax is a recommended pattern and not just a workaround.
I know TypeScript is in its alpha phase and a lot will change, and I HOPE so much that we get some nicer way to deal with this. But currently I either make everything a huge mess just to get TypeScript working (and this is within hundreds of files which I'm migrating over to TypeScript), or I just make the call that I know better than the compiler in this case (VERY DANGEROUS, I KNOW) so I can keep my code nice, and hopefully, when a better pattern comes out for handling this, I can migrate then.
Also, just as a side note, I know a lot of people love the fact that TypeScript is embracing and trying to stay as close to the new JavaScript features and known syntax as possible, which is great, but TypeScript is NOT the next version of JavaScript, so I don't see a problem with adding some syntactic sugar to the language, as people who want to use the latest and greatest official JavaScript implementation can still do so.
The author's specific issue with this seems to be solved, but the question is posed about ignoring errors, and for those who end up here looking for how to ignore errors:
If properly fixing the error or using more decent workarounds like those already suggested here is not an option, then as of TypeScript 2.6 (released on Oct 31, 2017) there is a way to ignore all errors from a specific line using a // @ts-ignore comment before the target line.
The mentioned documentation is succinct enough, but to recap:
// @ts-ignore
const s: string = false;
disables error reporting for this line.
However, this should only be used as a last resort, when fixing the error or using hacks like (x as any) would be much more trouble than losing all type checking for a line.
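For comparison, a minimal sketch (with a made-up Config type) of the two approaches:
interface Config { retries: number; }
declare const raw: unknown;

// @ts-ignore suppresses *every* error on the following line
const a: Config = raw;

// a cast silences only this one expression; everything else stays checked
const b: Config = raw as any;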
As for suppressing only certain errors, the current (mid-2018) state is discussed here, in Design Meeting Notes (2/16/2018) and further comments, and is basically
"no conclusion yet"
with strong opposition to introducing this kind of fine-tuning.
I think your question as posed is an XY problem. What you're going for is: how can I ensure that some of my class methods are guaranteed to have a correct this context?
For that problem, I would propose this solution:
class LambdaMethods {
  constructor(private message: string) {
    // re-bind the method so it always runs with this instance as `this`
    this.DoSomething = this.DoSomething.bind(this);
  }

  public DoSomething() {
    alert(this.message);
  }
}
This has several benefits.
First, you're being explicit about what's going on. Most programmers are probably not going to understand the subtle semantic difference between the member and method syntax in terms of codegen.
Second, it makes it very clear, from looking at the constructor, which methods are going to have a guaranteed this context. Critically, from a performance perspective, you don't want to write all your methods this way, just the ones that absolutely need it.
Finally, it preserves the OOP semantics of the class. You'll actually be able to use super.DoSomething from a derived class implementation of DoSomething.
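A quick usage sketch of both points (hypothetical calling code):
const obj = new LambdaMethods("hello");

// the method can be passed around detached from the instance,
// because the constructor bound it
setTimeout(obj.DoSomething, 100); // alerts "hello"

// and it still lives on the prototype, so derived classes can use super
class Derived extends LambdaMethods {
  public DoSomething() {
    super.DoSomething();
  }
}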
I'm sure you're aware of the standard form of defining a function without the arrow notation. There's another TypeScript expression that generates the exact same code but without the compile error:
class LambdaMethods {
  private message: string;
  public DoSomething: () => void;

  constructor(message: string) {
    this.message = message;
    // the arrow function captures the constructor's `this`, i.e. the instance
    this.DoSomething = () => { alert(this.message); };
  }
}
So why is this legal while the other one isn't? Well, according to the spec: an arrow function expression preserves the this of its enclosing context. So it preserves the meaning of this from the scope where it was declared. But for a function declared at the class level, this doesn't actually have a meaning.
Here's an example that's wrong for the exact same reason that might be more clear:
class LambdaMethods {
  private message: string;

  constructor(message: string) {
    this.message = message;
  }

  var a = this.message; // can't do this
}
The way the initializer works, by being combined into the constructor, is an implementation detail that can't be relied upon. It could change.
I am sure given time there will be a nice way to manage the this keyword, but currently I find it pretty dire.
One of the high-level goals (that I love) in TypeScript is to extend the JavaScript language and work with it, not fight it. How this operates is tricky but worth learning.

ServiceStack: RESTful Resource Versioning

I've taken a read of the Advantages of message-based web services article and am wondering: is there a recommended style/practice for versioning RESTful resources in ServiceStack? The different versions could render different responses or have different input parameters in the Request DTO.
I'm leaning toward URL-based versioning (i.e. /v1/movies/{Id}), but I have seen other practices that set the version in the HTTP headers (i.e. Content-Type: application/vnd.company.myapp-v2).
I'm hoping for a way that works with the metadata page, but it's not so much a requirement, as I've noticed that simply using folder structure/namespacing works fine when rendering routes.
For example (this doesn't render right in the metadata page but performs properly if you know the direct route/url)
/v1/movies/{id}
/v1.1/movies/{id}
Code
namespace Samples.Movies.Operations.v1_1
{
    [Route("/v1.1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}

namespace Samples.Movies.Operations.v1
{
    [Route("/v1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}
and corresponding services...
public class MovieService : ServiceBase<Samples.Movies.Operations.v1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1.Movies request)
    {
        ...
    }
}

public class MovieService : ServiceBase<Samples.Movies.Operations.v1_1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1_1.Movies request)
    {
        ...
    }
}
Try to evolve (not re-implement) existing services
For versioning, you are going to be in for a world of hurt if you try to maintain different static types for different version endpoints. We initially started down this route, but as soon as you have to support your first version the development effort to maintain multiple versions of the same service explodes, as you will need to maintain manual mapping between the different types, which easily leaks out into having to maintain multiple parallel implementations, each coupled to a different version's type - a massive violation of DRY. This is less of an issue for dynamic languages, where the same models can easily be re-used by different versions.
Take advantage of built-in versioning in serializers
My recommendation is not to explicitly version but take advantage of the versioning capabilities inside the serialization formats.
E.g. you generally don't need to worry about versioning with JSON clients, as the versioning capabilities of the JSON and JSV serializers are much more resilient.
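To illustrate that resilience in a language-agnostic way (sketched here in TypeScript with made-up payloads): a tolerant JSON deserializer simply drops fields it doesn't know and leaves missing fields unset, which is what lets old and new clients hit the same endpoint:
// an old client sends only Name; a newer client sends the newer fields
const fromOldClient = '{"Name":"Foo Bar"}';
const fromNewClient = '{"FirstName":"Foo","LastName":"Bar","Extra":"ignored"}';

interface FooDto {          // the server's latest DTO shape
  Name?: string;
  FirstName?: string;
  LastName?: string;
}

// both payloads deserialize without error: unknown fields are ignored,
// missing fields are simply undefined
const a: FooDto = JSON.parse(fromOldClient);
const b: FooDto = JSON.parse(fromNewClient);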
Enhance your existing services defensively
With XML and DataContracts you can freely add and remove fields without making a breaking change. If you add IExtensibleDataObject to your response DTOs you also have the potential to access data that's not defined on the DTO. My approach to versioning is to program defensively so as not to introduce a breaking change; you can verify this is the case with integration tests using old DTOs. Here are some tips I follow:
Never change the type of an existing property - if you need it to be a different type, add another property and use the old/existing one to determine the version (see the sketch after these tips)
Program defensively: realize which properties won't exist on older clients, so don't make them mandatory.
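For example, here is a hypothetical sketch of the first tip (written as a TypeScript-style DTO for brevity; the names are made up):
interface Foo {
  // v1: dates were ISO-8601 strings; the property is kept for old clients
  createdOn?: string;
  // v2: needed a different type, so it was added as a *new* property
  // instead of retyping createdOn; its presence also marks a v2+ client
  createdOnUtc?: number;
}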
Keep a single global namespace (only relevant for XML/SOAP endpoints)
I do this by using the [assembly] attribute in the AssemblyInfo.cs of each of your DTO projects:
[assembly: ContractNamespace("http://schemas.servicestack.net/types",
ClrNamespace = "MyServiceModel.DtoTypes")]
The assembly attribute saves you from manually specifying explicit namespaces on each DTO, i.e:
namespace MyServiceModel.DtoTypes
{
    [DataContract(Namespace = "http://schemas.servicestack.net/types")]
    public class Foo { .. }
}
If you want to use a different XML namespace than the default above you need to register it with:
SetConfig(new EndpointHostConfig {
    WsdlServiceNamespace = "http://schemas.my.org/types"
});
Embedding Versioning in DTOs
Most of the time, if you program defensively and evolve your services gracefully, you won't need to know exactly what version a specific client is using, as you can infer it from the data that is populated. But in the rare cases where your services need to tweak their behavior based on the specific version of the client, you can embed version information in your DTOs.
With the first release of your DTOs you publish, you can happily create them without any thought of versioning.
class Foo {
    string Name;
}
But maybe for some reason the Form/UI was changed and you no longer wanted the client to use the ambiguous Name variable, and you also wanted to track the specific version the client was using:
class Foo {
    Foo() {
        Version = 2;
    }

    int Version;
    string Name;
    string DisplayName;
    int Age;
}
Later it was discussed in a team meeting that DisplayName wasn't good enough and you should split it out into different fields:
class Foo {
    Foo() {
        Version = 3;
    }

    int Version;
    string Name;        // kept so v1/v2 clients keep working
    string DisplayName; // kept for v2 clients
    int Age;            // kept for v2 clients
    string FirstName;
    string LastName;
    DateTime? DateOfBirth;
}
So the current state is that you have 3 different client versions out, with existing calls that look like:
v1 Release:
client.Post(new Foo { Name = "Foo Bar" });

v2 Release:
client.Post(new Foo { Name = "Bar", DisplayName = "Foo Bar", Age = 18 });

v3 Release:
client.Post(new Foo { FirstName = "Foo", LastName = "Bar",
    DateOfBirth = new DateTime(1994, 01, 01) });
You can continue to handle these different versions in the same implementation (which will be using the latest v3 version of the DTOs) e.g:
class FooService : Service {
    public object Post(Foo request) {
        // the assertions below illustrate the values each client version sends
        //v1:
        request.Version == 0
        request.Name == "Foo"
        request.DisplayName == null
        request.Age == 0
        request.DateOfBirth == null

        //v2:
        request.Version == 2
        request.Name == null
        request.DisplayName == "Foo Bar"
        request.Age == 18
        request.DateOfBirth == null

        //v3:
        request.Version == 3
        request.Name == null
        request.DisplayName == null
        request.FirstName == "Foo"
        request.LastName == "Bar"
        request.Age == 0
        request.DateOfBirth == new DateTime(1994, 01, 01)
    }
}
Framing the Problem
The API is the part of your system that exposes its expression. It defines the concepts and the semantics of communicating in your domain. The problem comes when you want to change what can be expressed or how it can be expressed.
There can be differences in both the method of expression and what is being expressed. The first problem tends to be differences in tokens (first and last name instead of name). The second problem is expressing different things (the ability to rename oneself).
A long-term versioning solution will need to solve both of these challenges.
Evolving an API
Evolving a service by changing the resource types is a form of implicit versioning. It uses the construction of the object to determine behavior. It works best when there are only minor changes to the method of expression (like the names). It does not work well for more complex changes to the method of expression or for changes in expressiveness. Code tends to be scattered throughout.
Specific Versioning
When changes become more complex, it is important to keep the logic for each version separate. Even in mythz's example, he segregated the code for each version. However, the code is still mixed together in the same methods. It is very easy for code for the different versions to start collapsing into each other, and it is likely to spread out. Getting rid of support for a previous version can be difficult.
Additionally, you will need to keep your old code in sync to any changes in its dependencies. If a database changes, the code supporting the old model will also need to change.
A Better Way
The best way I've found is to tackle the expression problem directly. Each time a new version of the API is released, the older versions are re-implemented on top of the new layer. This is generally easy because the changes are small.
It really shines in two ways: first all the code to handle the mapping is in one spot so it is easy to understand or remove later and second it doesn't require maintenance as new APIs are developed (the Russian doll model).
The problem is when the new API is less expressive than the old API. This is a problem that will need to be solved no matter what the solution is for keeping the old version around. It just becomes clear that there is a problem and what the solution for that problem is.
The example from mythz's answer, rewritten in this style, is:
namespace APIv3 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            // the newest version talks to the data source directly
            var data = repository.getData();
            request.FirstName = data.firstName;
            request.LastName = data.lastName;
            request.DateOfBirth = data.dateOfBirth;
        }
    }
}

namespace APIv2 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            // each older version maps from the next newer one
            var v3Request = APIv3.FooService.OnPost(request);
            request.DisplayName = v3Request.FirstName + " " + v3Request.LastName;
            request.Age = (new DateTime() - v3Request.DateOfBirth).years;
        }
    }
}

namespace APIv1 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v2Request = APIv2.FooService.OnPost(request);
            request.Name = v2Request.DisplayName;
        }
    }
}
Each exposed object is clear. The same mapping code still needs to be written in both styles, but in the separated style only the mapping relevant to a type needs to be written. There is no need to explicitly map code that doesn't apply (which is just another potential source of error). Previous APIs stay static when you add future APIs or change the dependencies of the API layer. For example, if the data source changes, then only the most recent API (version 3) needs to change in this style. In the combined style, you would need to code the changes for each of the APIs supported.
One concern in the comments was the addition of types to the code base. This is not a problem because these types are exposed externally. Providing the types explicitly in the code base makes them easy to discover and isolate in testing. It is much better for maintainability to be clear. Another benefit is that this method does not produce additional logic, but only adds additional types.
I am also trying to come up with a solution for this and was thinking of doing something like the below. (Based on a lot of Googling and StackOverflow querying, so this is built on the shoulders of many others.)
First up, I don't want to debate whether the version should be in the URI or the request header. There are pros/cons to both approaches, so I think each of us needs to use whatever meets our requirements best.
This is about how to design/architect the Java message objects and the resource implementation classes.
So let's get to it.
I would approach this in two steps: minor changes (e.g. 1.0 to 1.1) and major changes (e.g. 1.1 to 2.0).
Approach for minor changes
So let’s say we go by the same example classes used by #mythz
Initially we have
class Foo { string Name; }
We provide access to this resource as /V1.0/fooresource/{id}
In my use case, I use JAX-RS:
@Path("/{versionid}/fooresource")
public class FooResource {

    @GET
    @Path("/{id}")
    public Foo getFoo(@PathParam("versionid") String versionid,
                      @PathParam("id") String fooId) {
        Foo foo = new Foo();
        // setters, load data from persistence, handle business logic, etc.
        return foo;
    }
}
Now let’s say we add 2 additional properties to Foo.
class Foo {
    string Name;
    string DisplayName;
    int Age;
}
What I do at this point is annotate the properties with a @Version annotation:
class Foo {
    @Version("V1.0") string Name;
    @Version("V1.1") string DisplayName;
    @Version("V1.1") int Age;
}
Then I have a response filter that, based on the requested version, returns to the user only the properties that match that version. Note that, for convenience, if there are properties that should be returned for all versions, you just don't annotate them, and the filter will return them irrespective of the requested version.
This is sort of like a mediation layer. What I have explained is a simplistic version, and it can get very complicated, but I hope you get the idea.
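To make the filter idea concrete, here is a hypothetical sketch (in TypeScript rather than Java, purely for brevity; the names are made up) of stripping out properties newer than the requested version:
// map each annotated property to the version that introduced it
const fieldVersions: Record<string, string> = {
  Name: "V1.0",
  DisplayName: "V1.1",
  Age: "V1.1",
};

function filterForVersion(dto: Record<string, unknown>, requested: string) {
  const result: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(dto)) {
    const introducedIn = fieldVersions[key];
    // unannotated properties are returned for every version; annotated ones
    // only when the requested version is new enough (lexical compare works
    // for this simple V<major>.<minor> scheme)
    if (introducedIn === undefined || introducedIn <= requested) {
      result[key] = value;
    }
  }
  return result;
}

// filterForVersion({ Name: "Foo", DisplayName: "Foo Bar", Age: 18 }, "V1.0")
// => { Name: "Foo" }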
Approach for Major Version
Now this can get quite complicated when a lot of changes have been made from one version to another. That is when we need to move to the second option.
Option 2 is essentially to branch off the codebase, make the changes on that codebase, and host both versions in different contexts. At this point we might have to refactor the codebase a bit to remove the version-mediation complexity introduced in approach one (i.e. make the code cleaner); this would mainly be in the filters.
Note that this is just what I am thinking and haven't implemented yet, and I wonder if it is a good idea.
Also, I was wondering if there are good mediation engines/ESBs that could do this type of transformation without having to use filters, but I haven't seen any that are as simple as using a filter. Maybe I haven't searched enough.
Interested in knowing the thoughts of others, and whether this solution will address the original question.

Should it be allowed to change the method signature in a non-statically-typed language?

Hypothetical and academic question.
pseudo-code:
class Book {
  read(theReader)
}

class BookWithMemory extends Book {
  read(theReader, aTimestamp = null)
}
Assuming:
an interface (if supported) would prohibit it
default values for parameters are supported
Notes:
PHP triggers a strict standards error for this.
I'm not surprised that PHP strict mode complains about such an override. It's very easy for a similar situation to arise unintentionally, in which part of a class hierarchy was edited to use a new signature and one or a few classes have fallen out of sync.
To avoid the ambiguity, name the new method something different (for this example, maybe readAt?), and override read to call readAt in the new class. This makes the intent plain to the interpreter as well as anyone reading the code.
The actual behavior in such a case is language-dependent -- more specifically, it depends on how much of the signature makes up the method selector, and how parameters are passed.
If the name alone is the selector (as in PHP or Perl), then it's down to how the language handles mismatched method parameter lists. If default arguments are processed at the call site based on the static type of the receiver, instead of at the callee's entry point, then when called through a base-class reference you'd end up with an undefined argument value instead of your specified default, similarly to what would happen if no default were specified.
If the number of parameters (with or without their types) is part of the method selector (as in Erlang or E), as is common in dynamic languages that run on the JVM or CLR, you have two different methods. Create a new overload taking the additional arguments, and override the base method with one that calls the new overload with default argument values.
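For contrast, a small TypeScript sketch (hypothetical types): even a statically typed checker can accept this kind of override, precisely because the added parameter is optional, so the subclass method remains assignable to the base signature:
class Book {
  read(theReader: string): void {
    console.log(`read by ${theReader}`);
  }
}

class BookWithMemory extends Book {
  // legal override: the extra parameter is optional
  read(theReader: string, aTimestamp: Date | null = null): void {
    super.read(theReader);
    if (aTimestamp) console.log(`at ${aTimestamp.toISOString()}`);
  }
}

const b: Book = new BookWithMemory();
b.read("alice"); // dispatches to the subclass; aTimestamp falls back to null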
If I am reading the question correctly, this question seems very language-specific (as in, it is not applicable to all dynamic languages), as I know you can do this in Ruby:
class Book
  def read(book)
    puts book
  end
end

class BookWithMemory < Book
  def read(book, aTimeStamp = nil)
    super(book)
    puts aTimeStamp
  end
end
I am not sure about dynamic languages besides Ruby. This seems like a pretty subjective question as well, as at least two languages were designed on either side of the issue (method overloading vs. not: Ruby vs. PHP).
