How does JRebel work?

Does JRebel use Javassist or some kind of bytecode manipulation? I'm asking this out of pure interest, I don't actually "need" to know :)

JRebel uses class rewriting (both ASM and Javassist) and JVM integration to version individual classes. Plus it integrates with app servers to redirect class/resource and web server lookups back to the workspace. And it also integrates with most app servers and frameworks to propagate changes to the configuration (metadata or files). That's the short of it. The long of it takes 10 world-class engineers to develop and support and is our commercial secret :)

The closest explanation I have read of how JRebel works is this write-up by Simon, ZeroTurnaround's Technical Evangelist.
Pasting the contents here:
JRebel instruments the application and JVM classes to create a layer of indirection. When an application class is loaded, all method bodies are rewritten to redirect through the runtime redirection service. This service manages and loads the class and method versions, using anonymous inner classes created for each version that is reloaded. Let’s look at an example. We’ll create a new class C with two methods:
public class C extends X {
    int y = 5;

    int method1(int x) {
        return x + y;
    }

    void method2(String s) {
        System.out.println(s);
    }
}
When Class C is loaded for the first time, JRebel instruments the class. The signature of this class will be the same, but the method bodies are now being redirected. The loaded class will now look something like this:
public class C extends X {
    int y = 5;

    int method1(int x) {
        Object[] o = new Object[1];
        o[0] = x;
        return Runtime.redirect(this, o, "C", "method1", "(I)I");
    }

    void method2(String s) {
        Object[] o = new Object[1];
        o[0] = s;
        Runtime.redirect(this, o, "C", "method2", "(Ljava/lang/String;)V");
    }
}
To the redirect calls, we pass in the calling object, the parameters of the method that has been called, our class name, our method name, and the parameter and return types. JRebel also loads a class with the implementations at a specific version, initially version 0. Let’s see what that looks like:
public abstract class C0 {
    public static int method1(C c, int x) {
        int tmp1 = Runtime.getFieldValue(c, "C", "y", "I");
        return x + tmp1;
    }

    public static void method2(C c, String s) {
        PrintStream tmp1 = Runtime.getFieldValue(
                null, "java/lang/System", "out", "Ljava/io/PrintStream;");
        Object[] o = new Object[1];
        o[0] = s;
        Runtime.redirect(tmp1, o, "java/io/PrintStream", "println", "(Ljava/lang/String;)V");
    }
}
Let’s now say the user changes their class C by adding a new method z() and invoking it from method1. Class C now looks like this:
public class C {
    int y = 5;

    int z() {
        return 10;
    }

    int method1(int x) {
        return x + y + z();
    }
    ...
}
The next time the runtime uses this class, JRebel detects that a newer version has been compiled and is on the filesystem, so it loads the new version, C1. This version has the additional method z and the updated implementation of method1.
public class C1 {
    public static int z(C c) {
        return 10;
    }

    public static int method1(C c, int x) {
        int tmp1 = Runtime.getFieldValue(c, "C", "y", "I");
        int tmp2 = Runtime.redirect(c, null, "C", "z", "()I");
        return x + tmp1 + tmp2;
    }
    ...
}
The Runtime.redirect call will always be routed to the latest version of the class C, so calling new C().method1(10) would return 15 before the code change and 25 afterwards. This implementation misses a lot of detail and optimizations, but you get the idea.
Source: http://zeroturnaround.com/rebellabs/why-hotswap-wasnt-good-enough-in-2001-and-still-isnt-today/
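To make the redirection idea above a bit more concrete, here is a rough, hypothetical sketch of a dispatcher in the style of the quoted Runtime.redirect. The names (VersionedRuntime, register, redirect) are invented for illustration; this is not JRebel's actual implementation, which generates bytecode rather than using reflection and also has to handle class loaders, fields, constructors and overload resolution.

import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only. Each reload registers a new implementation class
// (C0, C1, ...) for a class name; instrumented method bodies call redirect(),
// which invokes the matching static method on the newest registered version.
class VersionedRuntime {

    private static final Map<String, Class<?>> latestVersion = new ConcurrentHashMap<>();

    static void register(String className, Class<?> implementation) {
        latestVersion.put(className, implementation);
    }

    static Object redirect(Object target, Object[] args, String className,
                           String methodName) throws Exception {
        Class<?> impl = latestVersion.get(className);
        Object[] callArgs = new Object[(args == null ? 0 : args.length) + 1];
        callArgs[0] = target;                        // the original instance becomes the first argument
        if (args != null) {
            System.arraycopy(args, 0, callArgs, 1, args.length);
        }
        for (Method m : impl.getMethods()) {         // overload resolution is skipped for brevity
            if (m.getName().equals(methodName) && m.getParameterCount() == callArgs.length) {
                return m.invoke(null, callArgs);     // static call on the latest version class
            }
        }
        throw new NoSuchMethodException(className + "." + methodName);
    }
}

A real dispatcher would also consult the method descriptors (the "(I)I" strings in the quoted code) to pick the correct overload and would cache the lookup; the sketch only shows the shape of the indirection.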

There is also a great article on this topic by Dave Booth: Reloading Java Classes: HotSwap and JRebel — Behind the Scenes.

Related

I'm doing an assignment for my class where I need to create the functions below for a Path. I'm having trouble with the functions; am I overthinking it?

import java.awt.Point;
import java.util.ArrayList;
import java.util.Scanner;
public class Path
{
    ArrayList<Point> pointOne;
    ArrayList<Point> pointTwo;

    public Path() {
        pointOne = new ArrayList<Point>();
        pointTwo = new ArrayList<Point>();
    }

    public Path(Scanner s)
    {
        pointOne = new ArrayList<Point>();
        pointTwo = new ArrayList<Point>();
    }

    public int getPointCount()
    {
        return 0;
    }

    public int getX(int n)
    {
        return n;
    }

    public int getY(int n)
    {
        return n;
    }

    public void add(int x, int y)
    {
    }

    public String toString()
    {
        return "";   // TODO: build a string from the points
    }
}
So far this is what I have for the getX and getY functions; please feel free to correct me if I'm not doing something right.
I've tried to research some different ways to add two points to an ArrayList. I found one here, but it didn't help as much as I thought it would. I am also confused about how to use the Scanner to read in the points and build a path. Am I just overthinking this? I'm going to talk to my teacher to see if he can clear anything up, but any help would be much appreciated. Thanks.
import java.util.ArrayList;

public class Main {
    public static void main(String[] args) {
        double x1 = 1, y1 = 2, x2 = 3, y2 = 4;   // example coordinates
        ArrayList<Point> path = new ArrayList<Point>();
        path.add(new Point(x1, y1));
        path.add(new Point(x2, y2));
    }
}

public class Point {
    private double X;
    private double Y;

    public Point(double x, double y) {
        X = x;
        Y = y;
    }

    public double getX() { return X; }
    public double getY() { return Y; }
}
You see, a path is conceptually a list of points, specifically an ArrayList of Points here. Point is a type of object with attributes x and y. You don't need to create a type called Path, because a Path is just an ArrayList<Point>. You could extend ArrayList, but you probably haven't covered inheritance yet anyway. Unless you are told specifically to do so, there's no need to make Path a class.
Furthermore, Point could be removed as a type and replaced with an ArrayList<Double>. Conceptually, a point is just a list of its numeric coordinates in each direction.
Therefore, a Path could be represented as an ArrayList<ArrayList<Double>>.
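If your assignment does require a Path class, here is a minimal sketch of one way to fill in the stubs, using java.awt.Point (which has public int x and y fields). The input format assumed for the Scanner constructor, a point count followed by x y pairs, is an assumption; adjust it to whatever your assignment specifies.

import java.awt.Point;
import java.util.ArrayList;
import java.util.Scanner;

public class Path
{
    // One list of Points is enough; each Point already holds an x and a y.
    private final ArrayList<Point> points = new ArrayList<Point>();

    public Path() {
    }

    // Assumed format: an int giving the number of points, then x y pairs.
    public Path(Scanner s)
    {
        int count = s.nextInt();
        for (int i = 0; i < count; i++) {
            add(s.nextInt(), s.nextInt());
        }
    }

    public int getPointCount()
    {
        return points.size();
    }

    public int getX(int n)
    {
        return points.get(n).x;
    }

    public int getY(int n)
    {
        return points.get(n).y;
    }

    public void add(int x, int y)
    {
        points.add(new Point(x, y));
    }

    public String toString()
    {
        StringBuilder sb = new StringBuilder();
        for (Point p : points) {
            sb.append("(").append(p.x).append(", ").append(p.y).append(") ");
        }
        return sb.toString().trim();
    }
}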

Merging entries of 2 String lists in Java 8 [duplicate]

In the JDK 8 lambda build b93 there was a class java.util.stream.Streams with a zip method that could be used to zip streams (this is illustrated in the tutorial Exploring Java8 Lambdas. Part 1 by Dhananjay Nene). This function:
Creates a lazy and sequential combined Stream whose elements are the
result of combining the elements of two streams.
However, in b98 this has disappeared. In fact, the Streams class is not even accessible in java.util.stream in b98.
Has this functionality been moved, and if so how do I zip streams concisely using b98?
The application I have in mind is in this java implementation of Shen, where I replaced the zip functionality in the
static <T> boolean every(Collection<T> c1, Collection<T> c2, BiPredicate<T, T> pred)
static <T> T find(Collection<T> c1, Collection<T> c2, BiPredicate<T, T> pred)
functions with rather verbose code (which doesn't use functionality from b98).
I needed this as well so I just took the source code from b93 and put it in a "util" class. I had to modify it slightly to work with the current API.
For reference here's the working code (take it at your own risk...):
public static <A, B, C> Stream<C> zip(Stream<? extends A> a,
                                      Stream<? extends B> b,
                                      BiFunction<? super A, ? super B, ? extends C> zipper) {
    Objects.requireNonNull(zipper);
    Spliterator<? extends A> aSpliterator = Objects.requireNonNull(a).spliterator();
    Spliterator<? extends B> bSpliterator = Objects.requireNonNull(b).spliterator();

    // Zipping loses DISTINCT and SORTED characteristics
    int characteristics = aSpliterator.characteristics() & bSpliterator.characteristics() &
            ~(Spliterator.DISTINCT | Spliterator.SORTED);

    long zipSize = ((characteristics & Spliterator.SIZED) != 0)
            ? Math.min(aSpliterator.getExactSizeIfKnown(), bSpliterator.getExactSizeIfKnown())
            : -1;

    Iterator<A> aIterator = Spliterators.iterator(aSpliterator);
    Iterator<B> bIterator = Spliterators.iterator(bSpliterator);
    Iterator<C> cIterator = new Iterator<C>() {
        @Override
        public boolean hasNext() {
            return aIterator.hasNext() && bIterator.hasNext();
        }

        @Override
        public C next() {
            return zipper.apply(aIterator.next(), bIterator.next());
        }
    };

    Spliterator<C> split = Spliterators.spliterator(cIterator, zipSize, characteristics);
    return (a.isParallel() || b.isParallel())
            ? StreamSupport.stream(split, true)
            : StreamSupport.stream(split, false);
}
zip is one of the functions provided by the protonpack library.
Stream<String> streamA = Stream.of("A", "B", "C");
Stream<String> streamB = Stream.of("Apple", "Banana", "Carrot", "Doughnut");
List<String> zipped = StreamUtils.zip(streamA,
streamB,
(a, b) -> a + " is for " + b)
.collect(Collectors.toList());
assertThat(zipped,
contains("A is for Apple", "B is for Banana", "C is for Carrot"));
If you have Guava in your project, you can use the Streams.zip method (was added in Guava 21):
Returns a stream in which each element is the result of passing the corresponding element of each of streamA and streamB to function. The resulting stream will only be as long as the shorter of the two input streams; if one stream is longer, its extra elements will be ignored. The resulting stream is not efficiently splittable. This may harm parallel performance.
public class Streams {
...
public static <A, B, R> Stream<R> zip(Stream<A> streamA,
Stream<B> streamB, BiFunction<? super A, ? super B, R> function) {
...
}
}
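For reference, here is a quick usage sketch of the Guava method (com.google.common.collect.Streams, assuming Guava 21 or later is on the classpath); per the quoted documentation, the result is truncated to the shorter stream:

List<String> zipped = Streams.zip(
        Stream.of("A", "B", "C"),
        Stream.of("Apple", "Banana", "Carrot", "Doughnut"),
        (a, b) -> a + " is for " + b)
    .collect(Collectors.toList());
// zipped is [A is for Apple, B is for Banana, C is for Carrot]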
Zipping two streams using JDK8 with lambda (gist).
public static <A, B, C> Stream<C> zip(Stream<A> streamA, Stream<B> streamB, BiFunction<A, B, C> zipper) {
    final Iterator<A> iteratorA = streamA.iterator();
    final Iterator<B> iteratorB = streamB.iterator();
    final Iterator<C> iteratorC = new Iterator<C>() {
        @Override
        public boolean hasNext() {
            return iteratorA.hasNext() && iteratorB.hasNext();
        }

        @Override
        public C next() {
            return zipper.apply(iteratorA.next(), iteratorB.next());
        }
    };
    final boolean parallel = streamA.isParallel() || streamB.isParallel();
    return iteratorToFiniteStream(iteratorC, parallel);
}

public static <T> Stream<T> iteratorToFiniteStream(Iterator<T> iterator, boolean parallel) {
    final Iterable<T> iterable = () -> iterator;
    return StreamSupport.stream(iterable.spliterator(), parallel);
}
Since I can't conceive any use of zipping on collections other than indexed ones (Lists) and I am a big fan of simplicity, this would be my solution:
<A,B,C> Stream<C> zipped(List<A> lista, List<B> listb, BiFunction<A,B,C> zipper){
int shortestLength = Math.min(lista.size(),listb.size());
return IntStream.range(0,shortestLength).mapToObj( i -> {
return zipper.apply(lista.get(i), listb.get(i));
});
}
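For example (assuming the method above is in scope and java.util.Arrays is imported), zipping lists of unequal length simply truncates to the shorter one:

List<String> out = zipped(Arrays.asList("a", "b", "c"), Arrays.asList(1, 2), (s, i) -> s + i)
        .collect(Collectors.toList());
// out is [a1, b2] - the extra "c" is ignored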
The methods of the class you mentioned have been moved to the Stream interface itself as default methods. But it seems that the zip method has been removed, maybe because it is not clear what the default behavior for differently sized streams should be. Implementing the desired behavior is straightforward:
static <T> boolean every(
Collection<T> c1, Collection<T> c2, BiPredicate<T, T> pred) {
Iterator<T> it=c2.iterator();
return c1.stream().allMatch(x->!it.hasNext()||pred.test(x, it.next()));
}
static <T> T find(Collection<T> c1, Collection<T> c2, BiPredicate<T, T> pred) {
Iterator<T> it=c2.iterator();
return c1.stream().filter(x->it.hasNext()&&pred.test(x, it.next()))
.findFirst().orElse(null);
}
I humbly suggest this implementation. The resulting stream is truncated to the shorter of the two input streams.
public static <L, R, T> Stream<T> zip(Stream<L> leftStream, Stream<R> rightStream, BiFunction<L, R, T> combiner) {
Spliterator<L> lefts = leftStream.spliterator();
Spliterator<R> rights = rightStream.spliterator();
return StreamSupport.stream(new AbstractSpliterator<T>(Long.min(lefts.estimateSize(), rights.estimateSize()), lefts.characteristics() & rights.characteristics()) {
#Override
public boolean tryAdvance(Consumer<? super T> action) {
return lefts.tryAdvance(left->rights.tryAdvance(right->action.accept(combiner.apply(left, right))));
}
}, leftStream.isParallel() || rightStream.isParallel());
}
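For example (assuming the zip method above is in scope), the result stops at the end of the shorter stream:

List<String> out = zip(Stream.of("Ada", "Linus", "Grace"), Stream.of("1815", "1969"), (n, y) -> n + " " + y)
        .collect(Collectors.toList());
// out is [Ada 1815, Linus 1969]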
Using the latest Guava library (for the Streams class) you should be able to do
final Map<String, String> result =
Streams.zip(
collection1.stream(),
collection2.stream(),
AbstractMap.SimpleEntry::new)
.collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue()));
The Lazy-Seq library provides zip functionality.
https://github.com/nurkiewicz/LazySeq
This library is heavily inspired by scala.collection.immutable.Stream and aims to provide an immutable, thread-safe, and easy-to-use lazy sequence implementation, possibly infinite.
Would this work for you? It's a short function, which lazily evaluates over the streams it's zipping, so you can supply it with infinite streams (it doesn't need to take the size of the streams being zipped).
If the streams are finite it stops as soon as one of the streams runs out of elements.
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.stream.Stream;
class StreamUtils {
static <ARG1, ARG2, RESULT> Stream<RESULT> zip(
Stream<ARG1> s1,
Stream<ARG2> s2,
BiFunction<ARG1, ARG2, RESULT> combiner) {
final var i2 = s2.iterator();
return s1.map(x1 -> i2.hasNext() ? combiner.apply(x1, i2.next()) : null)
.takeWhile(Objects::nonNull);
}
}
Here is some unit test code (much longer than the code itself!)
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BiFunction;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.junit.jupiter.api.Assertions.assertEquals;
class StreamUtilsTest {
@ParameterizedTest
@MethodSource("shouldZipTestCases")
<ARG1, ARG2, RESULT>
void shouldZip(
String testName,
Stream<ARG1> s1,
Stream<ARG2> s2,
BiFunction<ARG1, ARG2, RESULT> combiner,
Stream<RESULT> expected) {
var actual = StreamUtils.zip(s1, s2, combiner);
assertEquals(
expected.collect(Collectors.toList()),
actual.collect(Collectors.toList()),
testName);
}
private static Stream<Arguments> shouldZipTestCases() {
return Stream.of(
Arguments.of(
"Two empty streams",
Stream.empty(),
Stream.empty(),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.empty()),
Arguments.of(
"One singleton and one empty stream",
Stream.of(1),
Stream.empty(),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.empty()),
Arguments.of(
"One empty and one singleton stream",
Stream.empty(),
Stream.of(1),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.empty()),
Arguments.of(
"Two singleton streams",
Stream.of("blah"),
Stream.of(1),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.of(pair("blah", 1))),
Arguments.of(
"One singleton, one multiple stream",
Stream.of("blob"),
Stream.of(2, 3),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.of(pair("blob", 2))),
Arguments.of(
"One multiple, one singleton stream",
Stream.of("foo", "bar"),
Stream.of(4),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.of(pair("foo", 4))),
Arguments.of(
"Two multiple streams",
Stream.of("nine", "eleven"),
Stream.of(10, 12),
(BiFunction<Object, Object, Object>) StreamUtilsTest::combine,
Stream.of(pair("nine", 10), pair("eleven", 12)))
);
}
private static List<Object> pair(Object o1, Object o2) {
return List.of(o1, o2);
}
static private <T1, T2> List<Object> combine(T1 o1, T2 o2) {
return List.of(o1, o2);
}
@Test
void shouldLazilyEvaluateInZip() {
final var a = new AtomicInteger();
final var b = new AtomicInteger();
final var zipped = StreamUtils.zip(
Stream.generate(a::incrementAndGet),
Stream.generate(b::decrementAndGet),
(xa, xb) -> xb + 3 * xa);
assertEquals(0, a.get(), "Should not have evaluated a at start");
assertEquals(0, b.get(), "Should not have evaluated b at start");
final var takeTwo = zipped.limit(2);
assertEquals(0, a.get(), "Should not have evaluated a at take");
assertEquals(0, b.get(), "Should not have evaluated b at take");
final var list = takeTwo.collect(Collectors.toList());
assertEquals(2, a.get(), "Should have evaluated a after collect");
assertEquals(-2, b.get(), "Should have evaluated b after collect");
assertEquals(List.of(2, 4), list);
}
}
public class Tuple<S,T> {
private final S object1;
private final T object2;
public Tuple(S object1, T object2) {
this.object1 = object1;
this.object2 = object2;
}
public S getObject1() {
return object1;
}
public T getObject2() {
return object2;
}
}
public class StreamUtils {
private StreamUtils() {
}
public static <T> Stream<Tuple<Integer,T>> zipWithIndex(Stream<T> stream) {
Stream<Integer> integerStream = IntStream.range(0, Integer.MAX_VALUE).boxed();
Iterator<Integer> integerIterator = integerStream.iterator();
return stream.map(x -> new Tuple<>(integerIterator.next(), x));
}
}
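A usage sketch for the zipWithIndex above (sequential streams only, since the index iterator is shared and stateful):

List<Tuple<Integer, String>> indexed = StreamUtils.zipWithIndex(Stream.of("a", "b", "c"))
        .collect(Collectors.toList());
// indexed holds the tuples (0, "a"), (1, "b"), (2, "c")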
AOL's cyclops-react, to which I contribute, also provides zipping functionality, both via an extended Stream implementation that also implements the reactive-streams interface ReactiveSeq, and via StreamUtils, which offers much of the same functionality via static methods for standard Java Streams.
List<Tuple2<Integer,Integer>> list = ReactiveSeq.of(1,2,3,4,5,6)
.zip(Stream.of(100,200,300,400));
List<Tuple2<Integer,Integer>> list = StreamUtils.zip(Stream.of(1,2,3,4,5,6),
Stream.of(100,200,300,400));
It also offers more generalized Applicative based zipping. E.g.
ReactiveSeq.of("a","b","c")
.ap3(this::concat)
.ap(of("1","2","3"))
.ap(of(".","?","!"))
.toList();
//List("a1.","b2?","c3!");
private String concat(String a, String b, String c){
return a+b+c;
}
And even the ability to pair every item in one stream with every item in another
ReactiveSeq.of("a","b","c")
.forEach2(str->Stream.of(str+"!","2"), a->b->a+"_"+b);
//ReactiveSeq("a_a!","a_2","b_b!","b_2","c_c!","c_2")
If anyone still needs this, there is the StreamEx.zipWith function in the StreamEx library:
StreamEx<String> givenNames = StreamEx.of("Leo", "Fyodor");
StreamEx<String> familyNames = StreamEx.of("Tolstoy", "Dostoevsky");
StreamEx<String> fullNames = givenNames.zipWith(familyNames, (gn, fn) -> gn + " " + fn);
fullNames.forEach(System.out::println); // prints: "Leo Tolstoy\nFyodor Dostoevsky\n"
This is great. I had to zip two streams into a Map, with one stream supplying the keys and the other the values:
Stream<String> streamA = Stream.of("A", "B", "C");
Stream<String> streamB = Stream.of("Apple", "Banana", "Carrot", "Doughnut");
final Stream<Map.Entry<String, String>> s = StreamUtils.zip(streamA,
streamB,
(a, b) -> {
final Map.Entry<String, String> entry = new AbstractMap.SimpleEntry<String, String>(a, b);
return entry;
});
System.out.println(s.collect(Collectors.toMap(e -> e.getKey(), e -> e.getValue())));
Output:
{A=Apple, B=Banana, C=Carrot}

What does it mean: plug-in's Execute method should be written to be stateless?

In the "Write a plug-in" MSDN Documentation it says:
The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. Also, multiple system threads could execute the plug-in at the same time.
I am wondering what exactly it means for the Execute method to be stateless.
A stateless method is a method that neither affects nor depends on global state, or on the state of its defining object, when executed.
In your case, it must:
not depend on the state of the plugin object when executed
not change the state of the plugin while executing
Here's an example where a method is not stateless;
class StatefulSum
{
private int a;
private int b;
public void SetA(int value) {
a = value;
}
public void SetB(int value) {
b = value;
}
public int ComputeSum() {
return a + b;
}
}
And this is a more subtle example of a method that is not stateless:
class SubtleStatefulSum
{
private int partialSum;
// Looks like it's stateless but it's not and in a
// concurrent environment this method is a recipe for disaster
public int ComputeSum(int a, int b)
{
partialSum = 0;
partialSum = partialSum + a;
partialSum = partialSum + b;
return partialSum;
}
}
This is a basic example of a method that is stateless.
class BasicStateless
{
public int ComputeSum(int a, int b)
{
return a + b;
}
}
Of course, the parameters of the computation could be obtained at run time using a more complex mechanism, as in the case of the Dynamics CRM plugin, via the IServiceProvider parameter.
You could conceivably have a stateless method like this:
class Stateless2
{
public int ComputeSum(IServiceProvider provider)
{
var numService = (INumberService)provider.GetService(typeof(INumberService));
int a = numService.GetNumberA();
int b = numService.GetNumberB();
return a + b;
}
}
Here the IServiceProvider instance knows how to retrieve an object that implements the INumberService interface, which in turn knows how to retrieve the numbers A and B. This is a combination of Dependency Injection and Inversion of Control.

Groovy: Is there a way to implement multiple inheritance while using type-checking?

@groovy.transform.TypeChecked
abstract class Entity {
    ...
    double getMass() {
        ...
    }
    ...
}

@groovy.transform.TypeChecked
abstract class Location {
    ...
    Entity[] getContent() {
        ...
    }
    ...
}

@groovy.transform.TypeChecked
abstract class Container {...} //inherits, somehow, from both Location and Entity

@groovy.transform.TypeChecked
class Main {
    void main() {
        double x
        Container c = new Chest() //Chest extends Container
        Entity e = c
        x = e.mass
        Location l = c
        x = l.content //Programmer error, should throw compile-time error
    }
}
Essentially, is there a way to achieve this without sacrificing any of the three properties outlined in main():
Direct access to fields, even virtual fields
Assigning to both super-classes
Typechecking (at compile-time)
I don't think you can do that with classes. Maybe you want traits (under discussion at the time; update: available in Groovy 2.3 and already rocking!) or, for a purely dynamic solution, @Mixin, which you'd back up with a good test suite.
My guess: @Delegate is your best friend here, but, as it stands, you can only store a Chest object in a Container-typed variable. So you'd need some interfaces.
Even if the superclass is not under your control, you can use Groovy's as operator to make it implement an interface.
First, I rewrote your classes to remove the abstract and add interfaces:
import groovy.transform.TypeChecked as TC

interface HasMass { double mass }
interface HasContent { Entity[] getContent() }

@TC class Entity implements HasMass { double mass }

@TC class Location {
    Entity[] getContent() {
        [new Entity(mass: 10.0), new Entity(mass: 20.0)] as Entity[]
    }
}
Note I didn't add HasContent to Location, to show the usage of as.
Second come the Container and Chest. @Delegate is added, and it auto-inherits the interfaces of the delegates:
@TC
abstract class Container {
    @Delegate Location location = new Location()
    @Delegate Entity entity = new Entity()
}

@TC class Chest extends Container { }
Last, it becomes type-checkable, as long as you stick to interfaces:
@TC class Mult {
    static main(args) {
        def x // use 'def' for flow-typing
        Container c = new Chest() //Chest extends Container
        HasMass e = c
        x = e.mass
        def l = c as HasContent
        x = l.content //Programmer error, should throw compile-time error
        assert c.content.collect { Entity it -> it.mass } == [10.0, 20.0]
    }
}

Can extension methods modify extended class values?

I was just trying to code the following extension method:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace _4Testing
{
static class ExtensionMethods
{
public static void AssignMe(this int me, int value)
{
me = value;
}
}
}
But it is not working. I mean, can I use an extension method to alter values of the extended class? I don't want to change the void return type to int, just change the extended class's value. Thanks in advance.
Your example uses int, which is a value type. Classes are reference types and behave a bit differently in this case.
While you could make a method that takes another reference, like AssignMe(this MyClass me, MyClass other), the method would work on a copy of the reference, so if you assign other to me it would only affect the local copy of the reference.
Also, keep in mind that extension methods are just static methods in disguise. I.e. they can only access public members of the extended types.
public sealed class Foo {
public int PublicValue;
private int PrivateValue;
}
public static class FooExtensions {
public static void Bar(this Foo f) {
f.PublicValue = 42;
// Doesn't compile as the extension method doesn't have access to Foo's internals
f.PrivateValue = 42;
}
}
A workaround is to write the extension on a wrapping reference type instead, as follows:
using System;
static class Program
{
static void Main(string[] args)
{
var me = new Integer { value = 5 };
int y = 2;
me.AssignMe(y);
Console.WriteLine(me); // prints 2
Console.ReadLine();
}
public static void AssignMe(this Integer me, int value)
{
me.value = value;
}
}
class Integer
{
public int value { get; set; }
public Integer()
{
value = 0;
}
public override string ToString()
{
return value.ToString();
}
}
Ramon, what you really need is a ref modifier on the first parameter (i.e. int me) of the extension method, but C# does not allow the ref modifier on parameters that have the this modifier.
[Update]
No workaround should be possible for your particular case of an extension method on a value type. Here is the reductio ad absurdum of what you are asking for, if you were allowed to do it; consider the C# statement:
5.AssignMe(10);
... now what on earth do you think it's supposed to do? Are you trying to assign 10 to 5?
Operator overloading cannot help you either.
This is an old post, but I ran into a similar problem trying to implement an extension method for the String class.
My original code was this:
public static void Revert(this string s)
{
    char[] xc = s.ToCharArray();
    s = new string(xc.Reverse().ToArray());   // requires System.Linq; reassigns only the local parameter
}
By using the new keyword I am creating a new object and since s is not passed by reference it will not be modified.
I changed it to the following which provides a solution to Ramon's problem:
public static string Reverse(this string s)
{
char[] xc = s.ToCharArray();
Array.Reverse(xc);
return new string(xc);
}
In which case the calling code will be:
s = s.Reverse();
To manipulate integers you can do something like:
public static int Increment(this int i)
{
    return i + 1;   // note: "return i++" would return the original value unchanged
}
i = i.Increment();
