All posts by dotte

A Roundup of Well-Known Website Architectures

Here are some of the favorite posts on HighScalability…

From: http://highscalability.com/all-time-favorites/

Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects

Hanu Kommalapati and Tom Christian
This article discusses:

  • SystemDomain, SharedDomain, and DefaultDomain
  • Object layout and other memory specifics
  • Method table layout
  • Method dispatching
This article uses the following technologies: .NET Framework, C#
          Since the common language runtime (CLR) will be the premiere infrastructure for building applications in Windows® for some time to come, gaining a deep understanding of it will help you build efficient, industrial-strength applications. In this article, we’ll explore CLR internals, including object instance layout, method table layout, method dispatching, interface-based dispatching, and various data structures.
We’ll be using very simple code samples written in C#, so any implicit references to language syntax should default to C#. Some of the data structures and algorithms discussed will change for the Microsoft® .NET Framework 2.0, but the concepts should largely remain the same. We’ll use the Visual Studio® .NET 2003 Debugger and the debugger extension Son of Strike (SOS) to peek into the data structures we discuss in this article. SOS understands CLR internal data structures and dumps out useful information. See the “Son of Strike” sidebar for loading SOS.dll into the Visual Studio .NET 2003 debugger process. Throughout the article, we will describe classes that have corresponding implementations in the Shared Source CLI (SSCLI), which you can download from msdn.microsoft.com/net/sscli. Figure 1 will help you navigate the megabytes of code in the SSCLI while searching for the referenced structures.
Figure 1 SSCLI Reference
Item SSCLI Path
AppDomain  sscli\clr\src\vm\appdomain.hpp
AppDomainStringLiteralMap  sscli\clr\src\vm\stringliteralmap.h
BaseDomain  sscli\clr\src\vm\appdomain.hpp
ClassLoader  sscli\clr\src\vm\clsload.hpp
EEClass  sscli\clr\src\vm\class.h
FieldDescs  sscli\clr\src\vm\field.h
GCHeap  sscli\clr\src\vm\gc.h
GlobalStringLiteralMap  sscli\clr\src\vm\stringliteralmap.h
HandleTable  sscli\clr\src\vm\handletable.h
InterfaceVTableMapMgr  sscli\clr\src\vm\appdomain.hpp
Large Object Heap  sscli\clr\src\vm\gc.h
LayoutKind  sscli\clr\src\bcl\system\runtime\interopservices\layoutkind.cs
LoaderHeaps  sscli\clr\src\inc\utilcode.h
MethodDescs  sscli\clr\src\vm\method.hpp
MethodTables  sscli\clr\src\vm\class.h
OBJECTREF  sscli\clr\src\vm\typehandle.h
SecurityContext  sscli\clr\src\vm\security.h
SecurityDescriptor  sscli\clr\src\vm\security.h
SharedDomain  sscli\clr\src\vm\appdomain.hpp
StructLayoutAttribute  sscli\clr\src\bcl\system\runtime\interopservices\attributes.cs
SyncTableEntry  sscli\clr\src\vm\syncblk.h
System namespace  sscli\clr\src\bcl\system
SystemDomain  sscli\clr\src\vm\appdomain.hpp
TypeHandle  sscli\clr\src\vm\typehandle.h
A word of caution before we start—the information provided in this article is only valid for the .NET Framework 1.1 (it’s also mostly true for Shared Source CLI 1.0, with the most notable exceptions being some interop scenarios) when running on the x86 platform. This information will change for the .NET Framework 2.0, so please do not build software that relies on the constancy of these internal structures.
Domains Created by the CLR Bootstrap
Before the CLR executes the first line of the managed code, it creates three application domains. Two of these are opaque from within the managed code and are not even visible to CLR hosts. They can only be created through the CLR bootstrapping process facilitated by the shim (mscoree.dll) and mscorwks.dll (or mscorsvr.dll for multiprocessor systems). As you can see in Figure 2, these are the System Domain and the Shared Domain, which are singletons. The third domain is the Default AppDomain, an instance of the AppDomain class that is the only named domain. For simple CLR hosts such as a console program, the default domain name is composed of the executable image name. Additional domains can be created from within managed code using the AppDomain.CreateDomain method or from unmanaged hosting code using the ICorRuntimeHost interface. Complicated hosts like ASP.NET create multiple domains based on the number of applications in a given Web site.
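As a minimal sketch of the managed route just mentioned (the domain name "SecondDomain" below is an arbitrary example), creating and unloading an additional domain looks like this:

using System;

class DomainDemo
{
    static void Main()
    {
        // The default domain is named after the executable image.
        Console.WriteLine(AppDomain.CurrentDomain.FriendlyName);

        // Additional domains can be created from managed code.
        AppDomain second = AppDomain.CreateDomain("SecondDomain");
        Console.WriteLine(second.FriendlyName);   // prints "SecondDomain"

        AppDomain.Unload(second);
    }
}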

Figure 2 Domains Created by the CLR Bootstrap
System Domain
The SystemDomain is responsible for creating and initializing the SharedDomain and the default AppDomain. It loads the system library mscorlib.dll into SharedDomain. It also keeps process-wide string literals interned implicitly or explicitly.
String interning is an optimization feature that’s a little bit heavy-handed in the .NET Framework 1.1, as the CLR does not give assemblies the opportunity to opt out of the feature. Nonetheless, it saves memory by having only a single instance of the string for a given literal across all the application domains.
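As a small illustration of the interning behavior (the literal used here is arbitrary), two identical literals end up referencing the same instance, while a string built at run time does not unless it is explicitly interned:

using System;

class InternDemo
{
    static void Main()
    {
        string a = "MyString";   // literal, interned when the assembly loads
        string b = "MyString";   // same literal, same interned instance
        string c = new string("MyString".ToCharArray());   // built at run time, not interned

        Console.WriteLine(object.ReferenceEquals(a, b));                  // True
        Console.WriteLine(object.ReferenceEquals(a, c));                  // False
        Console.WriteLine(object.ReferenceEquals(a, string.Intern(c)));   // True
    }
}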
SystemDomain is also responsible for generating process-wide interface IDs, which are used in creating InterfaceVtableMaps in each AppDomain. SystemDomain keeps track of all the domains in the process and implements functionality for loading and unloading the AppDomains.
SharedDomain
All of the domain-neutral code is loaded into SharedDomain. Mscorlib, the system library, is needed by the user code in all the AppDomains. It is automatically loaded into SharedDomain. Fundamental types from the System namespace like Object, ValueType, Array, Enum, String, and Delegate get preloaded into this domain during the CLR bootstrapping process. User code can also be loaded into this domain, using LoaderOptimization attributes specified by the CLR hosting app while calling CorBindToRuntimeEx. Console programs can load code into SharedDomain by annotating the app’s Main method with a System.LoaderOptimizationAttribute. SharedDomain also manages an assembly map indexed by the base address, which acts as a lookup table for managing shared dependencies of assemblies being loaded into DefaultDomain and of other AppDomains created in managed code. DefaultDomain is where non-shared user code is loaded.
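For the console case just described, a minimal sketch looks like the following; MultiDomainHost is only one of the possible LoaderOptimization values (SingleDomain and MultiDomain are the others), and which assemblies actually end up domain-neutral depends on the host and the CLR version:

using System;

class SharedDomainDemo
{
    // Hint to the CLR that assemblies may be loaded domain-neutral
    // (that is, into SharedDomain).
    [LoaderOptimization(LoaderOptimization.MultiDomainHost)]
    static void Main()
    {
        Console.WriteLine("Main annotated with LoaderOptimizationAttribute");
    }
}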
DefaultDomain
DefaultDomain is an instance of AppDomain within which application code is typically executed. While some applications require additional AppDomains to be created at runtime (such as apps that have plug-in architectures or apps doing a significant amount of run-time code generation), most applications create one domain during their lifetime. All code that executes in this domain is context-bound at the domain level. If an application has multiple AppDomains, any cross-domain access will occur through .NET Remoting proxies. Additional intra-domain context boundaries can be created using types inherited from System.ContextBoundObject. Each AppDomain has its own SecurityDescriptor, SecurityContext, and DefaultContext, as well as its own loader heaps (High-Frequency Heap, Low-Frequency Heap, and Stub Heap), Handle Tables (Handle Table, Large Object Heap Handle Table), Interface Vtable Map Manager, and Assembly Cache.
LoaderHeaps
LoaderHeaps are meant for loading various runtime CLR artifacts and optimization artifacts that live for the lifetime of the domain. These heaps grow by predictable chunks to minimize fragmentation. LoaderHeaps are different from the garbage collector (GC) Heap (or multiple heaps in case of a symmetric multiprocessor or SMP) in that the GC Heap hosts object instances while LoaderHeaps hold together the type system. Frequently accessed artifacts like MethodTables, MethodDescs, FieldDescs, and Interface Maps get allocated on a HighFrequencyHeap, while less frequently accessed data structures, such as EEClass and ClassLoader and its lookup tables, get allocated on a LowFrequencyHeap. The StubHeap hosts stubs that facilitate code access security (CAS), COM wrapper calls, and P/Invoke.
Having examined the domains and LoaderHeaps at a high level, we’ll now look at the physical details of these in the context of the simple app in Figure 3. We stopped the program execution at “mc.Method1();” and dumped the domain information using the SOS debugger extension command, DumpDomain (see the “Son of Strike” sidebar for SOS loading information).  Here is the edited output:

!DumpDomain
System Domain: 793e9d58, LowFrequencyHeap: 793e9dbc,
HighFrequencyHeap: 793e9e14, StubHeap: 793e9e6c,
Assembly: 0015aa68 [mscorlib], ClassLoader: 0015ab40

Shared Domain: 793eb278, LowFrequencyHeap: 793eb2dc,
HighFrequencyHeap: 793eb334, StubHeap: 793eb38c,
Assembly: 0015aa68 [mscorlib], ClassLoader: 0015ab40

Domain 1: 149100, LowFrequencyHeap: 00149164,
HighFrequencyHeap: 001491bc, StubHeap: 00149214,
Name: Sample1.exe, Assembly: 00164938 [Sample1],
ClassLoader: 00164a78
Figure 3 Sample1.exe
using System;

public interface MyInterface1
{
    void Method1();
    void Method2();
}
public interface MyInterface2
{
    void Method2();
    void Method3();
}

class MyClass : MyInterface1, MyInterface2
{
    public static string str = "MyString";
    public static uint   ui = 0xAAAAAAAA;
    public void Method1() { Console.WriteLine("Method1"); }
    public void Method2() { Console.WriteLine("Method2"); }
    public virtual void Method3() { Console.WriteLine("Method3"); }
}

class Program
{
    static void Main()
    {
        MyClass mc = new MyClass();
        MyInterface1 mi1 = mc;
        MyInterface2 mi2 = mc;

        int i = MyClass.str.Length;
        uint j = MyClass.ui;

        mc.Method1();
        mi1.Method1();
        mi1.Method2();
        mi2.Method2();
        mi2.Method3();
        mc.Method3();
    }
}
Our console program, Sample1.exe, is loaded into an AppDomain which has a name “Sample1.exe.” Mscorlib.dll is loaded into the SharedDomain but it is also listed against the SystemDomain as it is the core system library. A HighFrequencyHeap, LowFrequencyHeap, and StubHeap are allocated in each domain. The SystemDomain and the SharedDomain use the same ClassLoader, while the Default AppDomain uses its own.
The output does not show the reserved and committed sizes of the loader heaps. The HighFrequencyHeap initial reserve size is 32KB and its commit size is 4KB. LowFrequencyHeap and StubHeap are initially reserved at 8KB and committed at 4KB. Also not shown in the SOS output is the InterfaceVtableMap heap. Each domain has an InterfaceVtableMap (referred to here as IVMap) that is created on its own LoaderHeap during the domain initialization phase. The IVMap heap is reserved at 4KB and is committed at 4KB initially. We’ll discuss the significance of IVMap while exploring type layout in subsequent sections.
Figure 2 shows the default Process Heap, JIT Code Heap, GC Heap (for small objects) and Large Object Heap (for objects with size 85000 or more bytes) to illustrate the semantic difference between these and the loader heaps. The just-in-time (JIT) compiler generates x86 instructions and stores them on the JIT Code Heap. The GC Heap and the Large Object Heap are the garbage-collected heaps on which managed objects are instantiated.
Type Fundamentals
A type is the fundamental unit of programming in .NET. In C#, a type can be declared using the class, struct, and interface keywords. Most types are explicitly created by the programmer, however, in special interoperability cases and remote object invocation (.NET Remoting) scenarios, the .NET CLR implicitly generates types. These generated types include COM and Runtime Callable Wrappers and Transparent Proxies.
We’ll explore .NET type fundamentals by starting from a stack frame that contains an object reference (typically, the stack is one of the locations from which an object instance begins life). The code shown in Figure 4 contains a simple program with a console entry point that calls a static method, Create. Create builds an instance of type SmallClass, which contains a byte array used to demonstrate the creation of an object instance on the Large Object Heap. The code is trivial, but will serve for our discussion.
Figure 4 Large Objects and Small Objects
using System;

class SmallClass
{
    private byte[] _largeObj;
    public SmallClass(int size)
    {
        _largeObj = new byte[size];
        _largeObj[0] = 0xAA;
        _largeObj[1] = 0xBB;
        _largeObj[2] = 0xCC;
    }

    public byte[] LargeObj
    {
        get { return this._largeObj; }
    }
}

class SimpleProgram
{
    static void Main(string[] args)
    {
        SmallClass smallObj = SimpleProgram.Create(84930,10,15,20,25);
        return;
    }

    static SmallClass Create(int size1, int size2, int size3,
        int size4, int size5)
    {
        int objSize = size1 + size2 + size3 + size4 + size5;
        SmallClass smallObj = new SmallClass(objSize);
        return smallObj;
    }
}
Figure 5 shows a snapshot of a typical fastcall stack frame stopped at a breakpoint at the “return smallObj;” line inside the Create method. (Fastcall is the .NET calling convention which specifies that arguments to functions are to be passed in registers, when possible, with all other arguments passed on the stack right to left and popped later by the called function.) The value type local variable objSize is inlined within the stack frame. Reference type variables like smallObj are stored as a fixed size (a 4-byte DWORD) on the stack and contain the address of object instances allocated on the normal GC Heap. In traditional C++, this is an object pointer; in the managed world it’s an object reference. Nonetheless, it contains the address of an object instance. We’ll use the term ObjectInstance for the data structure located at the address pointed to by the object reference.

Figure 5 SimpleProgram Stack Frame and Heaps
The smallObj object instance on the normal GC Heap contains a Byte[] called _largeObj, whose size is 85000 bytes (note that the figure shows 85016 bytes, which is the actual storage size). The CLR treats objects with sizes greater than or equal to 85000 bytes differently than the smaller objects. Large objects are allocated on a Large Object Heap (LOH), while smaller objects are created on a normal GC Heap, which optimizes the object allocation and garbage collection. The LOH is not compacted, whereas the GC Heap is compacted whenever a GC collection occurs. Moreover, the LOH is only collected on full GC collections.
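The threshold can be observed directly from managed code. In this small sketch (the array lengths are arbitrary, chosen around the 85,000-byte boundary described above), the large array is reported as generation 2 because the LOH is only collected during full collections:

using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[84000];   // below the threshold: normal GC Heap
        byte[] large = new byte[85000];   // storage size >= 85,000 bytes: Large Object Heap

        Console.WriteLine(GC.GetGeneration(small));   // 0 (freshly allocated small object)
        Console.WriteLine(GC.GetGeneration(large));   // 2 (LOH objects are treated as gen 2)
    }
}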
The ObjectInstance of smallObj contains the TypeHandle that points to the MethodTable of the corresponding type. There will be one MethodTable for each declared type and all the object instances of the same type will point to the same MethodTable. This will contain information about the kind of type (interface, abstract class, concrete class, COM Wrapper, and proxy), the number of interfaces implemented, the interface map for method dispatch, the number of slots in the method table, and a table of slots that point to the implementations.
One important data structure MethodTable points to is EEClass. The CLR class loader creates EEClass from the metadata before MethodTable is laid out. In Figure 4, SmallClass’s MethodTable points to its EEClass. These structures point to their modules and assemblies. MethodTable and EEClass are typically allocated on the domain-specific loader heaps. Byte[] is a special case; the MethodTable and the EEClass are allocated on the loader heaps of the SharedDomain. Loader heaps are AppDomain-specific and any data structures already mentioned here, once loaded, will not go away until an AppDomain is unloaded. Also, the default AppDomain can’t be unloaded and hence the code lives until the CLR is shut down.
ObjectInstance
As we mentioned, all instances of value types are either inlined on the thread stack or inlined on the GC Heap. All reference types are created on the GC Heap or LOH. Figure 6 shows a typical object instance layout. An object can be referenced from stack-based local variables, handle tables in the interop or P/Invoke scenarios, from registers (the this pointer and method arguments while executing a method), or from the finalizer queue for objects having finalizer methods. The OBJECTREF does not point to the beginning of the Object Instance but at a DWORD offset (4 bytes). The DWORD is called Object Header and holds an index (a 1-based syncblk number) into a SyncTableEntry table. As the chaining is through an index, the CLR can move the table around in memory while increasing the size as needed. The SyncTableEntry maintains a weak reference back to the object so that the SyncBlock ownership can be tracked by the CLR. Weak references enable the GC to collect the object when no other strong references exist. SyncTableEntry also stores a pointer to SyncBlock that contains useful information, but is rarely needed by all instances of an object. This information includes the object’s lock, its hash code, any thunking data, and its AppDomain index. For most object instances, there will be no storage allocated for the actual SyncBlock and the syncblk number will be zero. This will change when the execution thread hits statements like lock(obj) or obj.GetHashCode, as shown here:

SmallClass obj = new SmallClass(100);   // SmallClass's constructor (Figure 4) takes a size argument
// Do some work here
lock(obj) { /* Do some synchronized work here */ }
obj.GetHashCode();

Figure 6 Object Instance Layout
In this code, obj will use zero (no syncblk) as its starting syncblk number. The lock statement causes the CLR to create a syncblk entry and update the object header with the corresponding number. As the C# lock keyword expands to a try-finally that makes use of the Monitor class, a Monitor object is created on the syncblk for synchronization. A call to the GetHashCode method populates the syncblk with the object hash code.
There are other fields in the SyncBlock that are used in COM interop and for marshaling delegates to unmanaged code, but which are not relevant for a typical object usage.
TypeHandle follows the syncblk number in the ObjectInstance. In order to maintain continuity, we will discuss TypeHandle after elaborating on the instance variables. A variable list of instance fields follows the TypeHandle. By default, the instance fields are packed in such a way that memory is used efficiently and padding for alignment is minimized. The code in Figure 7 shows a SimpleClass that has a bunch of instance variables of varying sizes.
Figure 7 SimpleClass with Instance Variables
class SimpleClass
{
    private byte b1 = 1;                // 1 byte
    private byte b2 = 2;                // 1 byte
    private byte b3 = 3;                // 1 byte
    private byte b4 = 4;                // 1 byte
    private char c1 = 'A';              // 2 bytes
    private char c2 = 'B';              // 2 bytes
    private short s1 = 11;              // 2 bytes
    private short s2 = 12;              // 2 bytes
    private int i1 = 21;                // 4 bytes
    private long l1 = 31;               // 8 bytes
    private string str = "MyString"; // 4 bytes (only OBJECTREF)

    //Total instance variable size = 28 bytes 

    static void Main()
    {
        SimpleClass simpleObj = new SimpleClass();
        return;
    }
}
Figure 8 shows an example of a SimpleClass object instance in the Visual Studio debugger memory window. We set a breakpoint on the return statement in Figure 7 and used the address of simpleObj contained in the ECX register to display the object instance in the memory window. The first 4-byte block is the syncblk number. As we didn’t use the instance in any synchronizing code (or access its hash code), this is set to 0. The object reference, as stored in the stack variable, points to 4 bytes starting at offset 4. The Byte variables b1, b2, b3, and b4 are all packed side by side. Both of the short variables, s1 and s2, are packed together. The String variable str is a 4-byte OBJECTREF that points to the actual instance of the string located on the GC Heap. String is a special type in that all instances containing the same literal will be made to point to the same instance in a global string table during the assembly loading process. This process is called string interning and is designed to optimize memory usage. As we mentioned previously, in the .NET Framework 1.1, an assembly cannot opt out of this interning process, although future versions of the CLR may provide this capability.

Figure 8 Debugger Memory Window for Object Instance
So the lexical sequence of member variables in the source code is not maintained in memory by default. In interop scenarios where lexical sequence has to be carried forward into memory, the StructLayoutAttribute can be used, which takes a LayoutKind enumeration as the argument. LayoutKind.Sequential will maintain the lexical sequence for the marshaled data, though in the .NET Framework 1.1 it will not affect the managed layout (however, in the .NET Framework 2.0, it will). In interop scenarios where you really need to have extra padding and explicit control of the field sequence, LayoutKind.Explicit can be combined with FieldOffset decoration at the field level.
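For illustration, here is a hedged sketch of both options; the type names, field names, and offsets are arbitrary examples and are not part of the sample program:

using System;
using System.Runtime.InteropServices;

// Sequential: fields are marshaled to unmanaged memory in lexical order.
[StructLayout(LayoutKind.Sequential)]
struct SequentialPoint
{
    public int X;
    public int Y;
}

// Explicit: the developer controls every field offset (and any padding).
[StructLayout(LayoutKind.Explicit, Size = 12)]
struct ExplicitHeader
{
    [FieldOffset(0)] public ushort Id;
    [FieldOffset(4)] public uint Length;   // bytes 2-3 are deliberate padding
    [FieldOffset(8)] public uint Flags;
}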
Having looked at the raw memory contents, let’s use SOS to look at the object instance. One useful command is DumpHeap, which allows listing of all the heap contents and all the instances of a particular type. Instead of relying on the registers, DumpHeap can show the address of the only instance we created:

!DumpHeap -type SimpleClass
Loaded Son of Strike data table version 5 from
"C:WINDOWSMicrosoft.NETFrameworkv1.1.4322mscorwks.dll"
 Address       MT     Size
00a8197c 00955124       36
Last good object: 00a819a0
total 1 objects
Statistics:
      MT    Count TotalSize Class Name
  955124        1        36 SimpleClass
The total size of the object is 36 bytes. No matter how large the string is, an instance of SimpleClass contains only a DWORD OBJECTREF to it. SimpleClass’s instance variables occupy only 28 bytes; the remaining 8 bytes are taken up by the TypeHandle (4 bytes) and the syncblk number (4 bytes). Having found the address of the instance simpleObj, let’s dump the contents of this instance using the DumpObj command, as shown here:

!DumpObj 0x00a8197c
Name: SimpleClass
MethodTable 0x00955124
EEClass 0x02ca33b0
Size 36(0x24) bytes
FieldDesc*: 00955064
      MT    Field   Offset                 Type       Attr    Value Name
00955124  400000a        4         System.Int64   instance      31 l1
00955124  400000b        c                CLASS   instance 00a819a0 str
    << some fields omitted from the display for brevity >>
00955124  4000003       1e          System.Byte   instance        3 b3
00955124  4000004       1f          System.Byte   instance        4 b4
As noted, the default layout the C# compiler generates for classes is LayoutKind.Auto (for structs, LayoutKind.Sequential is used); hence the class loader rearranged the instance fields to minimize padding. We can use ObjSize to dump the object graph, which includes the space taken up by the str instance. Here’s the output:

!ObjSize 0x00a8197c
sizeof(00a8197c) =       72 (    0x48) bytes (SimpleClass)
Son of Strike
The SOS debugger extension is used to display the contents of CLR data structures in this article. It’s part of the .NET Framework installation and is located at %windir%\Microsoft.NET\Framework\v1.1.4322. Before you load SOS into the process, enable managed debugging from the project properties in Visual Studio .NET. Add the directory in which SOS.dll is located to the PATH environment variable. To load SOS.dll, while at a breakpoint, open Debug | Windows | Immediate. In the immediate window, execute .load sos.dll. Use !help to get a list of debugger commands. For more information on SOS, see the June 2004 Bugslayer column.
If you subtract the size of the SimpleClass instance (36 bytes) from the overall size of the object graph (72 bytes), you should get the size of the str—that is, 36 bytes. Let’s verify this by dumping the str instance. Here’s the output:

!DumpObj 0x00a819a0
Name: System.String
MethodTable 0x009742d8
EEClass 0x02c4c6c4
Size 36(0x24) bytes

If you add the size of the string instance str (36 bytes) to the size of SimpleClass instance (36 bytes), you get a total size of 72 bytes, as reported by the ObjSize command.

Note that ObjSize will not include the memory taken up by the syncblk infrastructure. Also, in the .NET Framework 1.1, the CLR is not aware of the memory taken up by any unmanaged resources like GDI objects, COM objects, file handles, and so on; hence, they will not be reported by this command.
TypeHandle, a pointer to the MethodTable, is located right after the syncblk number. Before an object instance is created, the CLR looks up the loaded types, loads the type if not found, obtains the MethodTable address, creates the object instance, and populates the object instance with the TypeHandle value. The JIT compiler-generated code uses TypeHandle to locate the MethodTable for method dispatching. The CLR uses TypeHandle whenever it has to backtrack to the loaded type through MethodTable.
MethodTable
Each class and interface, when loaded into an AppDomain, will be represented in memory by a MethodTable data structure. This is a result of the class-loading activity before the first instance of the object is ever created. While ObjectInstance represents the state, MethodTable represents the behavior. MethodTable binds the object instance to the language compiler-generated memory-mapped metadata structures through EEClass. The information in the MethodTable and the data structures hanging off it can be accessed from managed code through System.Type. A pointer to the MethodTable can be acquired even in managed code through the Type.TypeHandle property, which returns a RuntimeTypeHandle. TypeHandle, which is contained in the ObjectInstance, points to an offset from the beginning of the MethodTable. This offset is 12 bytes by default and contains GC information which we will not discuss here.
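As a small, hedged sketch of that (the demo class here is illustrative only), the handle can be read from managed code; on this CLR implementation its Value corresponds to the MethodTable address that SOS reports:

using System;

class MyClass { }

class TypeHandleDemo
{
    static void Main()
    {
        // RuntimeTypeHandle.Value is an IntPtr; in this CLR implementation it is
        // the address of the type's MethodTable.
        RuntimeTypeHandle handle = typeof(MyClass).TypeHandle;
        Console.WriteLine("MethodTable: 0x{0:x8}", handle.Value.ToInt64());

        // Every instance of a type carries the same TypeHandle.
        Console.WriteLine(Type.GetTypeHandle(new MyClass()).Value == handle.Value);   // True
    }
}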
Figure 9 shows the typical layout of the MethodTable. We’ll show some of the important fields of the MethodTable, but for a more complete list, look at the figure. Let’s start with the Base Instance Size, as it has a direct correlation to the runtime memory profile.

Figure 9 MethodTable Layout
Base Instance Size
The Base Instance Size is the size of the object as computed by the class loader, based on the field declarations in the code. As discussed previously, the current GC implementation needs an object instance of at least 12 bytes. If a class does not have any instance fields defined, it will carry an overhead of 4 bytes. The rest of the 8 bytes will be taken up by the Object Header (which may contain a syncblk number) and TypeHandle. Again, the size of the object can be influenced by a StructLayoutAttribute.
Look at the memory snapshot (Visual Studio .NET 2003 memory window) of a MethodTable for MyClass from Figure 3 (MyClass with two interfaces) and compare it with SOS-generated output. In Figure 9, the object size is located at a 4-byte offset and the value is 12 (0x0000000C) bytes. The following is the output of DumpHeap from SOS:

!DumpHeap -type MyClass
 Address       MT     Size
00a819ac 009552a0       12
total 1 objects
Statistics:
    MT  Count TotalSize Class Name
9552a0      1        12    MyClass
Method Slot Table
Embedded within the MethodTable is a table of slots that point to the respective method descriptors (MethodDesc), enabling the behavior of the type. The Method Slot Table is created based on the linearized list of implementation methods laid out in the following order: Inherited virtuals, Introduced virtuals, Instance Methods, and Static Methods.
The ClassLoader walks through the metadata of the current class, parent class, and interfaces, and creates the method table. In the layout process, it replaces any overridden virtual methods, replaces any parent class methods being hidden, creates new slots, and duplicates slots as necessary. The duplication of slots is necessary to create an illusion that each interface has its own mini vtable. However, the duplicated slots point to the same physical implementation. MyClass has three instance methods, a class constructor (.cctor), and an object constructor (.ctor). The object constructor is automatically generated by the C# compiler for all objects having no constructors explicitly defined. The class constructor is generated by the compiler because we have a static variable defined and initialized. Figure 10 shows the layout of the method table for MyClass. The layout shows 10 methods because of the duplication of the Method2 slot for the IVMap, which will be covered next. Figure 11 shows the edited SOS dump of MyClass’s method table.

Figure 10 MyClass MethodTable Layout
Figure 11 SOS Dump of MyClass Method Table
!DumpMT -MD 0x9552a0
  Entry  MethodDesc  Return Type       Name
0097203b 00972040    String            System.Object.ToString()
009720fb 00972100    Boolean           System.Object.Equals(Object)
00972113 00972118    I4                System.Object.GetHashCode()
0097207b 00972080    Void              System.Object.Finalize()
00955253 00955258    Void              MyClass.Method1()
00955263 00955268    Void              MyClass.Method2()
00955263 00955268    Void              MyClass.Method2()
00955273 00955278    Void              MyClass.Method3()
00955283 00955288    Void              MyClass..cctor()
00955293 00955298    Void              MyClass..ctor()
The first four methods of any type will always be ToString, Equals, GetHashCode, and Finalize. These are virtual methods inherited from System.Object. The Method2 slot is duplicated, but both point to the same method descriptor. The explicitly coded .cctor and .ctor will be grouped with static methods and instance methods, respectively.
MethodDesc
Method Descriptor (MethodDesc) is an encapsulation of method implementation as the CLR knows it. There are several types of Method Descriptors that facilitate the calls to a variety of interop implementations, in addition to managed implementations. In this article we will only look at the managed MethodDesc in the context of the code shown in Figure 3. A MethodDesc is generated as a part of the class loading process and initially points to Intermediate Language (IL). Each MethodDesc is padded with a PreJitStub, which is responsible for triggering JIT compilation. Figure 12 shows a typical layout. The method table slot entry actually points to the stub instead of the actual MethodDesc data structure. This is at a negative offset of 5 bytes from the actual MethodDesc and is part of the 8-byte padding every method inherits. The 5 bytes contain instructions for a call to the PreJitStub routine. This 5-byte offset can be seen from the DumpMT output (of MyClass in Figure 11) of SOS, as MethodDesc is always 5 bytes after the location pointed to by the Method Slot Table entry. Upon the first invocation, a call to the JIT compilation routine is made. After the compilation is complete, the 5 bytes containing the call instruction will be overwritten with an unconditional jump to the JIT-compiled x86 code.

Figure 12 Method Descriptor
Disassembly of the code pointed to by the Method Table Slot entry in Figure 12 will show the call to the PreJitStub. Here’s an abridged display of the disassembly before JIT compilation for Method2:

!u 0x00955263
Unmanaged code
00955263 call        003C3538        ;call to the PreJitStub for Method2()
00955268 add         eax,68040000h   ;ignore this and the rest
                                     ;as !u thinks it as code

Now let’s execute the method and disassemble the same address:

!u 0x00955263
Unmanaged code
00955263 jmp     02C633E8        ;call to the jitted Method2()
00955268 add     eax,0E8040000h  ;ignore this and the rest
                                 ;as !u thinks it as code

Only the first 5 bytes at the address are code; the rest contains data of Method2’s MethodDesc. The “!u” command is unaware of this and generates gibberish, so you can ignore anything after the first 5 bytes.

CodeOrIL before JIT compilation contains the Relative Virtual Address (RVA) of the method implementation in IL. This field is flagged to indicate that it is IL. The CLR updates this field with the address of the JITed code after on-demand compilation. Let’s pick a method from the ones listed and dump the MethodDesc using the DumpMD command before and after JIT compilation:

!DumpMD 0x00955268
Method Name : [DEFAULT] [hasThis] Void MyClass.Method2()
MethodTable 9552a0
Module: 164008
mdToken: 06000006
Flags : 400
IL RVA : 00002068

After compilation, MethodDesc looks like this:

!DumpMD 0x00955268
Method Name : [DEFAULT] [hasThis] Void MyClass.Method2()
MethodTable 9552a0
Module: 164008
mdToken: 06000006
Flags : 400
Method VA : 02c633e8

The Flags field in the method descriptor is encoded to contain the information about the type of the method, such as static, instance, interface method, or COM implementation.

Let’s see another complicated aspect of MethodTable: Interface implementation. It’s made to look simple to the managed environment by absorbing all the complexity into the layout process. Next, we’ll show how the interfaces are laid out and how interface-based method dispatching really works.
Interface Vtable Map and Interface Map
At offset 12 in the MethodTable is an important pointer, the IVMap. As shown in Figure 9, IVMap points to an AppDomain-level mapping table that is indexed by a process-level interface ID. The interface ID is generated when the interface type is first loaded. Each interface implementation will have an entry in the IVMap. If MyInterface1 is implemented by two classes, there will be two entries in the IVMap table. The entry will point back to the beginning of the sub-table embedded within the MyClass method table, as shown in Figure 9. This is the reference with which interface-based method dispatching occurs. The IVMap is created based on the Interface Map information embedded within the method table. The Interface Map is created based on the metadata of the class during the MethodTable layout process. Once type loading is complete, only the IVMap is used in method dispatching.
The Interface Map at offset 28 will point to the InterfaceInfo entries embedded within the MethodTable. In this case, there are two entries, one for each of the two interfaces implemented by MyClass. The first 4 bytes of the first InterfaceInfo entry point to the TypeHandle of MyInterface1 (see Figure 9 and Figure 10). The next WORD (2 bytes) is taken up by Flags (where 0 is inherited from the parent, and 1 is implemented in the current class). The WORD right after Flags is Start Slot, which is used by the class loader to lay out the interface implementation sub-table. For MyInterface1, the value is 4, which means that slots 5 and 6 point to the implementation. For MyInterface2, the value is 6, so slots 7 and 8 point to the implementation. The ClassLoader duplicates the slots if necessary to create the illusion that each interface gets its own implementation while physically mapping to the same method descriptor. In MyClass, MyInterface1.Method2 and MyInterface2.Method2 will point to the same implementation.
Interface-based method dispatching occurs through the IVMap, while direct method dispatch occurs through the MethodDesc address stored at the respective slot. As mentioned earlier, the .NET Framework uses the fastcall calling convention. The first two arguments are typically passed through the ECX and EDX registers, if possible. The first argument of an instance method is always the this pointer, which is passed through the ECX register as shown by the “mov ecx,esi” statement:

mi1.Method1();
mov    ecx,edi                 ;move "this" pointer into ecx
mov    eax,dword ptr [ecx]     ;move "TypeHandle" into eax
mov    eax,dword ptr [eax+0Ch] ;move IVMap address into eax at offset 12
mov    eax,dword ptr [eax+30h] ;move the ifc impl start slot into eax
call   dword ptr [eax]         ;call Method1

mc.Method1();
mov    ecx,esi                 ;move "this" pointer into ecx
cmp    dword ptr [ecx],ecx     ;compare and set flags
call   dword ptr ds:[009552D8h];directly call Method1
These disassemblies show that the direct call to MyClass’s instance method does not use an offset. The JIT compiler writes the address of the MethodDesc directly into the code. Interface-based dispatch happens through the IVMap and requires a few more instructions than direct dispatch: one is used to fetch the address of the IVMap, and another to fetch the start slot of the interface implementation within the Method Slot Table. Also, casting an object instance to an interface merely copies the this pointer to the target variable. In Figure 3, the statement “mi1 = mc;” uses a single instruction to copy the OBJECTREF in mc to mi1.
Virtual Dispatch
Let’s look now at Virtual Dispatch and compare it with direct and interface-based dispatch. Here is the disassembly for a virtual method call to MyClass.Method3 from Figure 3:

mc.Method3();
mov    ecx,esi               ;move "this" pointer into ecx
mov    eax,dword ptr [ecx]   ;acquire the MethodTable address
call   dword ptr [eax+44h]   ;dispatch to the method at offset 0x44

Virtual dispatch always occurs through a fixed slot number, irrespective of the MethodTable pointer in a given implementation class (type) hierarchy. During the MethodTable layout, the ClassLoader replaces the parent implementation with the overriding child implementation. As a result, method calls coded against the parent object get dispatched to the child object’s implementation. The disassembly shows that the dispatch occurs through slot number 8, which can be confirmed in the debugger memory window (as seen in Figure 10) as well as in the DumpMT output.
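The same slot-replacement behavior is visible from C# alone. In this hypothetical sketch (the types are illustrative and not part of the sample program), the overriding implementation occupies the slot laid down for the parent's virtual method, so a call through a Parent reference lands in the Child implementation:

using System;

class Parent
{
    public virtual void Method3() { Console.WriteLine("Parent.Method3"); }
}

class Child : Parent
{
    // The class loader places this override in the same vtable slot that
    // Parent.Method3 occupies, so virtual dispatch picks it automatically.
    public override void Method3() { Console.WriteLine("Child.Method3"); }
}

class VirtualDispatchDemo
{
    static void Main()
    {
        Parent p = new Child();
        p.Method3();   // prints "Child.Method3"
    }
}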

Static Variables
Static variables are an important constituent part of the MethodTable data structure. They are allocated as part of the MethodTable, right after the method table slot array. Statics of primitive types are inlined, while static value objects like structs and static reference types are referenced through OBJECTREFs created in the handle tables. The OBJECTREF in the MethodTable refers to an OBJECTREF in the AppDomain handle table, which refers to the heap-created object instance. Once created, the OBJECTREF in the handle table will keep the object instance on the heap alive until the AppDomain is unloaded. In Figure 9, the static string variable str points to an OBJECTREF in the handle table, which points to “MyString” on the GC Heap.
EEClass
EEClass comes to life before the MethodTable is created and, when combined with the MethodTable, is the CLR version of a type declaration. In fact, EEClass and MethodTable are logically one data structure (together they represent a single type), and were split based on frequency of use. Fields that get used a lot are in the MethodTable, while fields that get used infrequently are in the EEClass. Thus, information needed to JIT-compile functions (like names, fields, and offsets) ends up in EEClass, whereas information needed at run time (like vtable slots and GC information) is in MethodTable.
There will be one EEClass for each type loaded into an AppDomain. This includes interface, class, abstract class, array, and struct. Each EEClass is a node of a tree tracked by the execution engine. The CLR uses this network to navigate through the EEClass structures for purposes including class loading, MethodTable layout, type verification, and type casting. The child-parent relationship between EEClasses is established based on the inheritance hierarchy, whereas parent-child relationships are established based on the combination of inheritance hierarchy and class loading sequence. New EEClass nodes get added, node relationships get patched, and new relationships get established as the execution of the managed code progresses. There is also a horizontal relationship with sibling EEClasses in the network. EEClass has three fields to manage the node relationships between loaded types: ParentClass, SiblingChain, and ChildrenChain. Refer to Figure 13 for the schematics of EEClass in the context of MyClass from Figure 3.
Figure 13 shows only a few of the fields relevant to this discussion. Because we’ve omitted some fields in the layout, we have not shown the offsets in this figure. EEClass has a circular reference to MethodTable. EEClass also points to MethodDesc chunks allocated on the HighFrequencyHeap of the default AppDomain. A reference to a list of FieldDesc objects allocated on the process heap provides field layout information during MethodTable construction. EEClass is allocated on the LowFrequencyHeap of the AppDomain so that the operating system can better perform page management of memory, thereby reducing the working set.

Figure 13 EEClass Layout
Other fields shown in Figure 13 are self-explanatory in the context of MyClass (Figure 3). Let’s look now at the real physical memory by dumping the EEClass using SOS. Run the program from Figure 3 after setting a breakpoint on the line mc.Method1();. First obtain the address of EEClass for MyClass using the Name2EE command:

!Name2EE C:\Working\test\ClrInternals\Sample1.exe MyClass

MethodTable: 009552a0
EEClass: 02ca3508
Name: MyClass

The first argument to Name2EE is the module name, which can be obtained from the DumpDomain command. Now that we have the address of the EEClass, we’ll dump the EEClass itself:

!DumpClass 02ca3508
Class Name : MyClass, mdToken : 02000004, Parent Class : 02c4c3e4
ClassLoader : 00163ad8, Method Table : 009552a0, Vtable Slots : 8
Total Method Slots : a, NumInstanceFields: 0,
NumStaticFields: 2,FieldDesc*: 00955224

      MT    Field   Offset  Type           Attr    Value    Name
009552a0  4000001   2c      CLASS          static 00a8198c  str
009552a0  4000002   30      System.UInt32  static aaaaaaaa  ui
Figure 13 and the DumpClass output look essentially the same. Metadata token (mdToken) represents the MyClass index in the memory mapped metadata tables of the module PE file, and the Parent class points to System.Object. Sibling Chain (Figure 13) shows that it is loaded as a result of the loading of the Program class.
MyClass has eight vtable slots (methods that can be virtually dispatched). Even though Method1 and Method2 are not virtual, they are considered virtual methods when dispatched through interfaces, so they are added to the list. Add .cctor and .ctor to the list, and you get 10 (0xa) total methods. The class has two static fields that are listed at the end. MyClass has no instance fields. The rest of the fields are self-explanatory.
Conclusion
That concludes our tour of the some of the most important internals of the CLR. Obviously, there’s much more to be covered, and in much more depth, but we hope this has given you a glimpse into how things work. Much of the information presented here will likely change with subsequent releases of the CLR and the .NET Framework. But although the CLR data structures covered in this article may change, the concepts should remain the same.
Hanu Kommalapati is an Architect at Microsoft Gulf Coast District (Houston). In his current role at Microsoft, he helps enterprise customers in building scalable component frameworks based on the .NET Framework. He can be reached at hanuk@microsoft.com.
Tom Christian is an Escalation Engineer with Developer Support at Microsoft, working with ASP.NET and the .NET debugger extension for WinDBG (sos/psscor). He is based in Charlotte, NC and can be contacted at tomchris@microsoft.com.

深入探索.NET框架内部了解CLR如何创建运行时对象

本文讨论:

SystemDomain, SharedDomain, and DefaultDomain
对象布局和内存细节。
方法表布局。
方法分派(Method dispatching)。

本文使用下列技术: .NET Framework, C#

本页内容
 CLR启动程序(Bootstrap)创建的域 CLR启动程序(Bootstrap)创建的域
系统域(System Domain) 系统域(System Domain)
共享域(Shared Domain) 共享域(Shared Domain)
默认域(Default Domain) 默认域(Default Domain)
加载器堆(Loader Heaps) 加载器堆(Loader Heaps)
类型原理 类型原理
对象实例 对象实例
方法表 方法表
基实例大小 基实例大小
方法槽表(Method Slot Table) 方法槽表(Method Slot Table)
方法描述(MethodDesc) 方法描述(MethodDesc)
接口虚表图和接口图 接口虚表图和接口图
虚分派(Virtual Dispatch) 虚分派(Virtual Dispatch)
静态变量 静态变量
EEClass EEClass
Conclusion结论 Conclusion结论

随着通用语言运行时(CLR)即将成为在Windows®下开发应用程序的首选架构,对其进行深入理解会帮助你建立有效的工业强度的应用程序。在本文中,我们将探索CLR内部,包括对象实例布局,方法表布局,方法分派,基于接口的分派和不同的数据结构。

我们将使用C#编写的简单代码示例,以便任何固有的语言语法含义是C#的缺省定义。某些此处讨论的数据结构和算法可能会在Microsoft® .NET Framework 2.0中改变,但是主要概念应该保持不变。我们使用Visual Studio® .NET 2003调试器和调试器扩展Son of Strike (SOS)来查看本文讨论的数据结构。SOS理解CLR的内部数据结构并输出有用信息。请参考“Son of Strike”补充资料,了解如何将SOS.dll装入Visual Studio .NET 2003调试器的进程空间。本文中,我们将描述在共享源代码CLI(Shared Source CLI,SSCLI)中有相应实现的类,你可以从msdn.microsoft.com/net/sscli下载。图1将帮助你在SSCLI的数以兆计的代码中找到所参考的结构。

在我们开始前,请注意:本文提供的信息只对在X86平台上运行的.NET Framework 1.1有效(对于Shared Source CLI 1.0也大部分适用,只是在某些交互操作的情况下必须注意例外),对于.NET Framework 2.0会有改变,所以请不要在构建软件时依赖于这些内部结构的不变性。

CLR启动程序(Bootstrap)创建的域

在CLR执行托管代码的第一行代码前,会创建三个应用程序域。其中两个对于托管代码甚至CLR宿主程序(CLR hosts)都是不可见的。它们只能由CLR启动进程创建,而提供CLR启动进程的是shim——mscoree.dll和mscorwks.dll (在多处理器系统下是mscorsvr.dll)。正如2所示,这些域是系统域(System Domain)和共享域(Shared Domain),都是使用了单件(Singleton)模式。第三个域是缺省应用程序域(Default AppDomain),它是一个AppDomain的实例,也是唯一的有命名的域。对于简单的CLR宿主程序,比如控制台程序,默认的域名由可执行映象文件的名字组成。其它的域可以在托管代码中使用AppDomain.CreateDomain方法创建,或者在非托管的代码中使用ICORRuntimeHost接口创建。复杂的宿主程序,比如ASP.NET,对于特定的网站会基于应用程序的数目创建多个域。

2 由CLR启动程序创建的域

系统域(System Domain)

系统域负责创建和初始化共享域和默认应用程序域。它将系统库mscorlib.dll载入共享域,并且维护进程范围内部使用的隐含或者显式字符串符号。

字符串驻留(string interning)是.NET Framework 1.1中的一个优化特性,它的处理方法显得有些笨拙,因为CLR没有给程序集机会选择此特性。尽管如此,由于在所有的应用程序域中对一个特定的符号只保存一个对应的字符串,此特性可以节省内存空间。

系统域还负责产生进程范围的接口ID,并用来创建每个应用程序域的接口虚表映射图(InterfaceVtableMaps)的接口。系统域在进程中保持跟踪所有域,并实现加载和卸载应用程序域的功能。

共享域(Shared Domain)

所有不属于任何特定域的代码被加载到系统库SharedDomain.Mscorlib,对于所有应用程序域的用户代码都是必需的。它会被自动加载到共享域中。系统命名空间的基本类型,如Object, ValueType, Array, Enum, String, and Delegate等等,在CLR启动程序过程中被预先加载到本域中。用户代码也可以被加载到这个域中,方法是在调用CorBindToRuntimeEx时使用由CLR宿主程序指定的LoaderOptimization特性。控制台程序也可以加载代码到共享域中,方法是使用System.LoaderOptimizationAttribute特性声明Main方法。共享域还管理一个使用基地址作为索引的程序集映射图,此映射图作为管理共享程序集依赖关系的查找表,这些程序集被加载到默认域(DefaultDomain)和其它在托管代码中创建的应用程序域。非共享的用户代码被加载到默认域。

默认域(Default Domain)

默认域是应用程序域(AppDomain)的一个实例,一般的应用程序代码在其中运行。尽管有些应用程序需要在运行时创建额外的应用程序域(比如有些使用插件,plug-in,架构或者进行重要的运行时代码生成工作的应用程序),大部分的应用程序在运行期间只创建一个域。所有在此域运行的代码都是在域层次上有上下文限制。如果一个应用程序有多个应用程序域,任何的域间访问会通过.NET Remoting代理。额外的域内上下文限制信息可以使用System.ContextBoundObject派生的类型创建。每个应用程序域有自己的安全描述符(SecurityDescriptor),安全上下文(SecurityContext)和默认上下文(DefaultContext),还有自己的加载器堆(高频堆,低频堆和代理堆),句柄表,接口虚表管理器和程序集缓存。

加载器堆(Loader Heaps)

加载器堆的作用是加载不同的运行时CLR部件和优化在域的整个生命期内存在的部件。这些堆的增长基于可预测块,这样可以使碎片最小化。加载器堆不同于垃圾回收堆(或者对称多处理器上的多个堆),垃圾回收堆保存对象实例,而加载器堆同时保存类型系统。经常访问的部件如方法表,方法描述,域描述和接口图,分配在高频堆上,而较少访问的数据结构如EEClass和类加载器及其查找表,分配在低频堆。代理堆保存用于代码访问安全性(code access security, CAS)的代理部件,如COM封装调用和平台调用(P/Invoke)。

从高层次了解域后,我们准备看看它们在一个简单的应用程序的上下文中的物理细节,见图3。我们在程序运行时停在mc.Method1(),然后使用SOS调试器扩展命令DumpDomain来输出域的信息。(请查看Son of Strike了解SOS的加载信息)。这里是编辑后的输出:

!DumpDomain
System Domain: 793e9d58, LowFrequencyHeap: 793e9dbc,
HighFrequencyHeap: 793e9e14, StubHeap: 793e9e6c,
Assembly: 0015aa68 [mscorlib], ClassLoader: 0015ab40
Shared Domain: 793eb278, LowFrequencyHeap: 793eb2dc,
HighFrequencyHeap: 793eb334, StubHeap: 793eb38c,
Assembly: 0015aa68 [mscorlib], ClassLoader: 0015ab40
Domain 1: 149100, LowFrequencyHeap: 00149164,
HighFrequencyHeap: 001491bc, StubHeap: 00149214,
Name: Sample1.exe, Assembly: 00164938 [Sample1],
ClassLoader: 00164a78

我们的控制台程序,Sample1.exe,被加载到一个名为“Sample1.exe”的应用程序域。Mscorlib.dll被加载到共享域,不过因为它是核心系统库,所以也在系统域中列出。每个域会分配一个高频堆,低频堆和代理堆。系统域和共享域使用相同的类加载器,而默认应用程序使用自己的类加载器。

输出没有显示加载器堆的保留尺寸和已提交尺寸。高频堆的初始化大小是32KB,每次提交4KB。SOS的输出也没有显示接口虚表堆(InterfaceVtableMap)。每个域有一个接口虚表堆(简称为IVMap),由自己的加载器堆在域初始化阶段创建。IVMap保留大小是4KB,开始时提交4KB。我们将会在后续部分研究类型布局时讨论IVMap的意义。

2显示默认的进程堆,JIT代码堆,GC堆(用于小对象)和大对象堆(用于大小等于或者超过85000字节的对象),它说明了这些堆和加载器堆的语义区别。即时(just-in-time, JIT)编译器产生x86指令并且保存到JIT代码堆中。GC堆和大对象堆是用于托管对象实例化的垃圾回收堆。

类型原理

类型是.NET编程中的基本单元。在C#中,类型可以使用class,struct和interface关键字进行声明。大多数类型由程序员显式创建,但是,在特别的交互操作(interop)情形和远程对象调用(.NET Remoting)场合中,.NET CLR会隐式的产生类型,这些产生的类型包含COM和运行时可调用封装及传输代理(Runtime Callable Wrappers and Transparent Proxies)。

我们通过一个包含对象引用的栈开始研究.NET类型原理(典型地,栈是一个对象实例开始生命期的地方)。4中显示的代码包含一个简单的程序,它有一个控制台的入口点,调用了一个静态方法。Method1创建一个SmallClass的类型实例,该类型包含一个字节数组,用于演示如何在大对象堆创建对象。尽管这是一段无聊的代码,但是可以帮助我们进行讨论。

5显示了停止在Create方法“return smallObj;”代码行断点时的fastcall栈结构(fastcall时.NET的调用规范,它说明在可能的情况下将函数参数通过寄存器传递,而其它参数按照从右到左的顺序入栈,然后由被调用函数完成出栈操作)。本地值类型变量objSize内含在栈结构中。引用类型变量如smallObj以固定大小(4字节DWORD)保存在栈中,包含了在一般GC堆中分配的对象的地址。对于传统C++,这是对象的指针;在托管世界中,它是对象的引用。不管怎样,它包含了一个对象实例的地址,我们将使用术语对象实例(ObjectInstance)描述对象引用指向地址位置的数据结构。

5 SimpleProgram的栈结构和堆

一般GC堆上的smallObj对象实例包含一个名为_largeObj的字节数组(注意,图中显示的大小为85016字节,是实际的存贮大小)。CLR对大于或等于85000字节的对象的处理和小对象不同。大对象在大对象堆(LOH)上分配,而小对象在一般GC堆上创建,这样可以优化对象的分配和回收。LOH不会压缩,而GC堆在GC回收时进行压缩。还有,LOH只会在完全GC回收时被回收。

smallObj的对象实例包含类型句柄(TypeHandle),指向对应类型的方法表。每个声明的类型有一个方法表,而同一类型的所有对象实例都指向同一个方法表。它包含了类型的特性信息(接口,抽象类,具体类,COM封装和代理),实现的接口数目,用于接口分派的接口图,方法表的槽(slot)数目,指向相应实现的槽表。

方法表指向一个名为EEClass的重要数据结构。在方法表创建前,CLR类加载器从元数据中创建EEClass。图4中,SmallClass的方法表指向它的EEClass。这些结构指向它们的模块和程序集。方法表和EEClass一般分配在共享域的加载器堆。加载器堆和应用程序域关联,这里提到的数据结构一旦被加载到其中,就直到应用程序域卸载时才会消失。而且,默认的应用程序域不会被卸载,所以这些代码的生存期是直到CLR关闭为止。

对象实例

正如我们说过的,所有值类型的实例或者包含在线程栈上,或者包含在GC堆上。所有的引用类型在GC堆或者LOH上创建。图6显示了一个典型的对象布局。一个对象可以通过以下途径被引用:基于栈的局部变量,在交互操作或者平台调用情况下的句柄表,寄存器(执行方法时的this指针和方法参数),拥有终结器(finalizer)方法的对象的终结器队列。OBJECTREF不是指向对象实例的开始位置,而是有一个DWORD的偏移量(4字节)。此DWORD称为对象头,保存一个指向SyncTableEntry表的索引(从1开始计数的syncblk编号。因为通过索引进行连接,所以在需要增加表的大小时,CLR可以在内存中移动这个表。SyncTableEntry维护一个反向的弱引用,以便CLR可以跟踪SyncBlock的所有权。弱引用让GC可以在没有其它强引用存在时回收对象。SyncTableEntry还保存了一个指向SyncBlock的指针,包含了很少需要被一个对象的所有实例使用的有用的信息。这些信息包括对象锁,哈希编码,任何转换层(thunking)数据和应用程序域的索引。对于大多数的对象实例,不会为实际的SyncBlock分配内存,而且syncblk编号为0。这一点在执行线程遇到如lock(obj)或者obj.GetHashCode的语句时会发生变化,如下所示:

SmallClass obj = new SmallClass()
// Do some work here
lock(obj) { /* Do some synchronized work here */ }
obj.GetHashCode();

在以上代码中,smallObj会使用0作为它的起始的syncblk编号。lock语句使得CLR创建一个syncblk入口并使用相应的数值更新对象头。因为C#的lock关键字会扩展为try-finally语句并使用Monitor类,一个用作同步的Monitor对象在syncblk上创建。堆GetHashCode的调用会使用对象的哈希编码增加syncblk。

在SyncBlock中有其它的域,它们在COM交互操作和封送委托(marshaling delegates)到非托管代码时使用,不过这和典型的对象用处无关。

类型句柄紧跟在对象实例中的syncblk编号后。为了保持连续性,我会在说明实例变量后讨论类型句柄。实例域(Instance field)的变量列表紧跟在类型句柄后。默认情况下,实例域会以内存最有效使用的方式排列,这样只需要最少的用作对齐的填充字节。图7的代码显示了SimpleClass包含有一些不同大小的实例变量。

图8显示了在Visual Studio调试器的内存窗口中的一个SimpleClass对象实例。我们在图7的return语句处设置了断点,然后使用ECX寄存器保存的simpleObj地址在内存窗口显示对象实例。前4个字节是syncblk编号。因为我们没有用任何同步代码使用此实例(也没有访问它的哈希编码),syncblk编号为0。保存在栈变量的对象实例,指向起始位置的4个字节的偏移处。字节变量b1,b2,b3和b4被一个接一个的排列在一起。两个short类型变量s1和s2也被排列在一起。字符串变量str是一个4字节的OBJECTREF,指向GC堆中分配的实际的字符串实例。字符串是一个特别的类型,因为所有包含同样文字符号的字符串,会在程序集加载到进程时指向一个全局字符串表的同一实例。这个过程称为字符串驻留(string interning),设计目的是优化内存的使用。我们之前已经提过,在NET Framework 1.1中,程序集不能选择是否使用这个过程,尽管未来版本的CLR可能会提供这样的能力。

所以默认情况下,成员变量在源代码中的词典顺序没有在内存中保持。在交互操作的情况下,词典顺序必须被保存到内存中,这时可以使用StructLayoutAttribute特性,它有一个LayoutKind的枚举类型作为参数。LayoutKind.Sequential可以为被封送(marshaled)数据保持词典顺序,尽管在.NET Framework 1.1中,它没有影响托管的布局(但是.NET Framework 2.0可能会这么做)。在交互操作的情况下,如果你确实需要额外的填充字节和显示的控制域的顺序,LayoutKind.Explicit可以和域层次的FieldOffset特性一起使用。

看完底层的内存内容后,我们使用SOS看看对象实例。一个有用的命令是DumpHeap,它可以列出所有的堆内容和一个特别类型的所有实例。无需依赖寄存器,DumpHeap可以显示我们创建的唯一一个实例的地址。

!DumpHeap -type SimpleClass
Loaded Son of Strike data table version 5 from
"C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\mscorwks.dll"
 Address       MT     Size
00a8197c 00955124       36
Last good object: 00a819a0
total 1 objects
Statistics:
      MT    Count TotalSize Class Name
  955124        1        36 SimpleClass

对象的总大小是36字节,不管字符串多大,SimpleClass的实例只包含一个DWORD的对象引用。SimpleClass的实例变量只占用28字节,其它8个字节包括类型句柄(4字节)和syncblk编号(4字节)。找到simpleObj实例的地址后,我们可以使用DumpObj命令输出它的内容,如下所示:

!DumpObj 0x00a8197c
Name: SimpleClass
MethodTable 0x00955124
EEClass 0x02ca33b0
Size 36(0x24) bytes
FieldDesc*: 00955064
      MT    Field   Offset                 Type       Attr    Value Name
00955124  400000a        4         System.Int64   instance      31 l1
00955124  400000b        c                CLASS   instance 00a819a0 str
    << some fields omitted from the display for brevity >>
00955124  4000003       1e          System.Byte   instance        3 b3
00955124  4000004       1f          System.Byte   instance        4 b4

正如之前说过,C#编译器对于类的默认布局使用LayoutType.Auto(对于结构使用LayoutType.Sequential);因此类加载器重新排列实例域以最小化填充字节。我们可以使用ObjSize来输出包含被str实例占用的空间,如下所示:

!ObjSize 0x00a8197c
sizeof(00a8197c) =       72 (    0x48) bytes (SimpleClass)

如果你从对象图的全局大小(72字节)减去SimpleClass的大小(36字节),就可以得到str的大小,即36字节。让我们输出str实例来验证这个结果:

!DumpObj 0x00a819a0
Name: System.String
MethodTable 0x009742d8
EEClass 0x02c4c6c4
Size 36(0x24) bytes

如果你将字符串实例的大小(36字节)加上SimpleClass实例的大小(36字节),就可以得到ObjSize命令报告的总大小72字节。

请注意ObjSize不包含syncblk结构占用的内存。而且,在.NET Framework 1.1中,CLR不知道非托管资源占用的内存,如GDI对象,COM对象,文件句柄等等;因此它们不会被这个命令报告。

指向方法表的类型句柄在syncblk编号后分配。在对象实例创建前,CLR查看加载类型,如果没有找到,则进行加载,获得方法表地址,创建对象实例,然后把类型句柄值追加到对象实例中。JIT编译器产生的代码在进行方法分派时使用类型句柄来定位方法表。CLR在需要史可以通过方法表反向访问加载类型时使用类型句柄。

方法表

每个类和实例在加载到应用程序域时,会在内存中通过方法表来表示。这是在对象的第一个实例创建前的类加载活动的结果。对象实例表示的是状态,而方法表表示了行为。通过EEClass,方法表把对象实例绑定到被语言编译器产生的映射到内存的元数据结构(metadata structures)。方法表包含的信息和外挂的信息可以通过System.Type访问。指向方法表的指针在托管代码中可以通过Type.RuntimeTypeHandle属性获得。对象实例包含的类型句柄指向方法表开始位置的偏移处,偏移量默认情况下是12字节,包含了GC信息。我们不打算在这里对其进行讨论。

图9显示了方法表的典型布局。我们会说明类型句柄的一些重要的域,但是对于完全的列表,请参看此图。让我们从基实例大小(Base Instance Size)开始,因为它直接关系到运行时的内存状态。

基实例大小

基实例大小是由类加载器计算的对象的大小,基于代码中声明的域。之前已经讨论过,当前GC的实现需要一个最少12字节的对象实例。如果一个类没有定义任何实例域,它至少包含额外的4个字节。其它的8个字节被对象头(可能包含syncblk编号)和类型句柄占用。再说一次,对象的大小会受到StructLayoutAttribute的影响。

看看图3中显示的MyClass(有两个接口)的方法表的内存快照(Visual Studio .NET 2003内存窗口),将它和SOS的输出进行比较。在图9中,对象大小位于4字节的偏移处,值为12(0x0000000C)字节。以下是SOS的DumpHeap命令的输出:

!DumpHeap -type MyClass
 Address       MT     Size
00a819ac 009552a0       12
total 1 objects
Statistics:
    MT  Count TotalSize Class Name
9552a0      1        12    MyClass

方法槽表(Method Slot Table)

在方法表中包含了一个槽表,指向各个方法的描述(MethodDesc),提供了类型的行为能力。方法槽表是基于方法实现的线性链表,按照如下顺序排列:继承的虚方法,引入的虚方法,实例方法,静态方法。

类加载器在当前类,父类和接口的元数据中遍历,然后创建方法表。在排列过程中,它替换所有的被覆盖的虚方法和被隐藏的父类方法,创建新的槽,在需要时复制槽。槽复制是必需的,它可以让每个接口有自己的最小的vtable。但是被复制的槽指向相同的物理实现。MyClass包含接口方法,一个类构造函数(.cctor)和对象构造函数(.ctor)。对象构造函数由C#编译器为所有没有显式定义构造函数的对象自动生成。因为我们定义并初始化了一个静态变量,编译器会生成一个类构造函数。10显示了MyClass的方法表的布局。布局显示了10个方法,因为Method2槽为接口IVMap进行了复制,下面我们会进行讨论。图11显示了MyClass的方法表的SOS的输出。

任何类型的开始4个方法总是ToString, Equals, GetHashCode, and Finalize。这些是从System.Object继承的虚方法。Method2槽被进行了复制,但是都指向相同的方法描述。代码显示定义的.cctor和.ctor会分别和静态方法和实例方法分在一组。

方法描述(MethodDesc)

方法描述(MethodDesc)是CLR知道的方法实现的一个封装。有几种类型的方法描述,除了用于托管实现,分别用于不同的交互操作实现的调用。在本文中,我们只考察图3代码中的托管方法描述。方法描述在类加载过程中产生,初始化为指向IL。每个方法描述带有一个预编译代理(PreJitStub),负责触发JIT编译。图12显示了一个典型的布局,方法表的槽实际上指向代理,而不是实际的方法描述数据结构。对于实际的方法描述,这是-5字节的偏移,是每个方法的8个附加字节的一部分。这5个字节包含了调用预编译代理程序的指令。5字节的偏移可以从SOS的DumpMT输出从看到,因为方法描述总是方法槽表指向的位置后面的5个字节。在第一次调用时,会调用JIT编译程序。在编译完成后,包含调用指令的5个字节会被跳转到JIT编译后的x86代码的无条件跳转指令覆盖。

12 方法描述

对图12的方法表槽指向的代码进行反汇编,显示了对预编译代理的调用。以下是在Method2被JIT编译前的反汇编的简化显示。

!u 0x00955263
Unmanaged code
00955263 call        003C3538        ;call to the jitted Method2()
00955268 add         eax,68040000h   ;ignore this and the rest
                                     ;as !u thinks it as code

现在我们执行此方法,然后反汇编相同的地址:

!u 0x00955263
Unmanaged code
00955263 jmp     02C633E8        ;call to the jitted Method2()
00955268 add     eax,0E8040000h  ;ignore this and the rest
                                 ;as !u thinks it as code

在此地址,只有开始5个字节是代码,剩余字节包含了Method2的方法描述的数据。“!u”命令不知道这一点,所以生成的是错乱的代码,你可以忽略5个字节后的所有东西。

CodeOrIL在JIT编译前包含IL中方法实现的相对虚地址(Relative Virtual Address ,RVA)。此域用作标志,表示是否IL。在按要求编译后,CLR使用编译后的代码地址更新此域。让我们从列出的函数中选择一个,然后用DumpMT命令分别输出在JIT编译前后的方法描述的内容:

!DumpMD 0x00955268
Method Name : [DEFAULT] [hasThis] Void MyClass.Method2()
MethodTable 9552a0
Module: 164008
mdToken: 06000006
Flags : 400
IL RVA : 00002068

编译后,方法描述的内容如下:

!DumpMD 0x00955268
Method Name : [DEFAULT] [hasThis] Void MyClass.Method2()
MethodTable 9552a0
Module: 164008
mdToken: 06000006
Flags : 400
Method VA : 02c633e8

方法的这个标志域的编码包含了方法的类型,例如静态,实例,接口方法或者COM实现。让我们看方法表另外一个复杂的方面:接口实现。它封装了布局过程所有的复杂性,让托管环境觉得这一点看起来简单。然后,我们将说明接口如何进行布局和基于接口的方法分派的确切工作方式。

接口虚表图和接口图

在方法表的第12字节偏移处是一个重要的指针,接口虚表(IVMap)。如图9所示,接口虚表指向一个应用程序域层次的映射表,该表以进程层次的接口ID作为索引。接口ID在接口类型第一次加载时创建。每个接口的实现都在接口虚表中有一个记录。如果MyInterface1被两个类实现,在接口虚表表中就有两个记录。该记录会反向指向MyClass方法表内含的子表的开始位置,如图9所示。这是接口方法分派发生时使用的引用。接口虚表是基于方法表内含的接口图信息创建,接口图在方法表布局过程中基于类的元数据创建。一旦类型加载完成,只有接口虚表用于方法分派。

第28字节位置的接口图会指向内含在方法表中的接口信息记录。在这种情况下,对MyClass实现的两个接口中的每一个都有两条记录。第一条接口信息记录的开始4个字节指向MyInterface1的类型句柄(见图9图10)。接着的WORD(2字节)被一个标志占用(0表示从父类派生,1表示由当前类实现)。在标志后的WORD是一个开始槽(Start Slot),被类加载器用来布局接口实现的子表。对于MyInterface2,开始槽的值为4(从0开始编号),所以槽5和6指向实现;对于MyInterface2,开始槽的值为6,所以槽7和8指向实现。类加载器会在需要时复制槽来产生这样的效果:每个接口有自己的实现,然而物理映射到同样的方法描述。在MyClass中,MyInterface1.Method2和MyInterface2.Method2会指向相同的实现。

基于接口的方法分派通过接口虚表进行,而直接的方法分派通过保存在各个槽的方法描述地址进行。如之前提及,.NET框架使用fastcall的调用约定,最先2个参数在可能的时候一般通过ECX和EDX寄存器传递。实例方法的第一个参数总是this指针,所以通过ECX寄存器传送,可以在“mov ecx,esi”语句看到这一点:

mi1.Method1();
mov    ecx,edi                 ;move "this" pointer into ecx
mov    eax,dword ptr [ecx]     ;move "TypeHandle" into eax
mov    eax,dword ptr [eax+0Ch] ;move IVMap address into eax at offset 12
mov    eax,dword ptr [eax+30h] ;move the ifc impl start slot into eax
call   dword ptr [eax]         ;call Method1
mc.Method1();
mov    ecx,esi                 ;move "this" pointer into ecx
cmp    dword ptr [ecx],ecx     ;compare and set flags
call   dword ptr ds:[009552D8h];directly call Method1

This disassembly shows that calling an instance method of MyClass directly does not involve any offsets; the JIT compiler writes the address of the MethodDesc directly into the code. Interface-based dispatch goes through the IVMap and needs a few extra instructions compared with direct dispatch: one instruction to get the IVMap address and another to get the start slot of the interface implementation in the method slot table. Also, casting an object instance to an interface only copies the this pointer into the target variable; in Figure 2, the statement "mi1 = mc" uses a single instruction to copy the object reference of mc into mi1.
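
For reference, the kind of C# that produces the two call sites disassembled above looks roughly like the following sketch (assuming the MyClass and MyInterface1 definitions from Figure 3, which are not repeated here):

MyClass mc = new MyClass();     // direct dispatch target
MyInterface1 mi1 = mc;          // the interface cast only copies the object reference
mi1.Method1();                  // interface-based dispatch through the IVMap
mc.Method1();                   // direct call through the MethodDesc address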

Virtual Dispatch

Now let's look at virtual dispatch and compare it with interface-based dispatch. Here is the disassembly of the virtual call to MyClass.Method3 from the code in Figure 3:

mc.Method3();
mov    ecx,esi               ;move "this" pointer into ecx
mov    eax,dword ptr [ecx]   ;acquire the MethodTable address
call   dword ptr [eax+44h]   ;dispatch to the method at offset 0x44

Virtual dispatch always goes through a fixed slot number, regardless of which type's method table pointer appears in the implementation hierarchy. During method table layout, the class loader replaces the parent's implementation with the implementation of the overriding subclass. As a result, method calls made on a parent object reference are dispatched to the subclass implementation. The disassembly shows that the dispatch happens through slot 8, which can be seen in the debugger memory window (as shown in Figure 10) and in the DumpMT output.
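
To make the fixed-slot dispatch concrete, here is a small hypothetical example (not part of the Figure 3 sample) of why a call through a parent reference ends up in the child's implementation:

class Parent { public virtual string Who() { return "Parent"; } }
class Child : Parent { public override string Who() { return "Child"; } }

Parent p = new Child();
Console.WriteLine(p.Who());   // prints "Child": the loader put the override into the same vtable slot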

Static Variables

Static variables are an integral part of the method table data structure; they are allocated at the end of the method table, after the slot array. All primitive static types are inlined, while static value objects such as structs and static reference types are referred to through object references created in the handle table. The object reference in the method table points to an object reference in the AppDomain's handle table, which in turn refers to the object instance created on the heap. Once created, the object reference in the handle table keeps the instance on the heap alive until the AppDomain is unloaded. In Figure 9, the static string variable str points to an object reference in the handle table, which refers to MyString on the GC heap.

EEClass

EEClass comes into existence before the method table is created and, combined with the method table, it is the CLR's version of the type declaration. In fact, EEClass and the method table are logically one data structure (together they represent a single type); they are split only according to frequency of use: frequently used fields go into the method table, while less frequently used fields go into EEClass. Thus, information needed by functions being JIT-compiled (such as names, fields, and offsets) lives in EEClass, while information needed at run time (such as vtable slots and garbage-collection information) lives in the method table.

One EEClass is loaded into the AppDomain for every type, including interfaces, classes, abstract classes, arrays, and structs. Each EEClass is a node of a tree tracked by the execution engine. The CLR uses this network to navigate through the EEClass structures for purposes such as class loading, method table layout, type verification, and type casting. Child-parent relationships between EEClasses are established based on the inheritance hierarchy, while parent-child relationships are based on a combination of the interface hierarchy and class load order. As managed code executes, new EEClass nodes are added, node relationships are augmented, and new relationships are established. Adjacent EEClasses in the network also have lateral relationships. An EEClass has three fields for managing the node relationships of loaded types: the parent class, the sibling chain, and the children chain. Refer to Figure 13 for the semantics of the EEClass for MyClass (from Figure 4) in this context.

Figure 13 shows only the fields relevant to this discussion. Because some fields of the layout are omitted, the figure does not show exact offsets. EEClass has an indirect reference to the method table, and it also points to the method descriptor chunks allocated on the high-frequency heap of the default AppDomain. A reference to the field descriptor list, allocated on the process heap when the method table is created, provides the field layout information. EEClass itself is allocated on the AppDomain's low-frequency heap so that the operating system can manage memory paging more effectively, thereby reducing the working set.

Figure 13 EEClass Layout

The other fields in Figure 13 are self-explanatory in the context of MyClass (Figure 3). Let's now look at the actual physical memory of the EEClass as shown by SOS. Run the program from Figure 3 with a breakpoint set on the mc.Method1 line. First use the Name2EE command to get the address of the EEClass of MyClass.

!Name2EE C:\Working\test\ClrInternals\Sample1.exe MyClass
MethodTable: 009552a0
EEClass: 02ca3508
Name: MyClass

The first parameter of Name2EE is the module name, which can be obtained from the DumpDomain command. Now that we have the address of the EEClass, we can dump it:

!DumpClass 02ca3508
Class Name : MyClass, mdToken : 02000004, Parent Class : 02c4c3e4
ClassLoader : 00163ad8, Method Table : 009552a0, Vtable Slots : 8
Total Method Slots : a, NumInstanceFields: 0,
NumStaticFields: 2,FieldDesc*: 00955224
      MT    Field   Offset  Type           Attr    Value    Name
009552a0  4000001   2c      CLASS          static 00a8198c  str
009552a0  4000002   30      System.UInt32  static aaaaaaaa  ui

Figure 13 and the DumpClass output look exactly alike. The metadata token (mdToken) represents the index of MyClass in the metadata tables of the module's PE file mapped into memory, and the parent class points to System.Object. Since the sibling chain points to an EEClass named Program, we can tell that Figure 13 shows the state after Program was loaded.

MyClass has 8 vtable slots (methods that can be dispatched virtually). Even though Method1 and Method2 are not virtual, they are considered virtual because they can be dispatched through an interface, and so they are added to the list. Add the .cctor and .ctor to the list and you get a total of 10 methods. Listed last are the two static fields of the class; MyClass has no instance fields. The other fields are self-explanatory.

Conclusion

Our tour through some of the most important internals of the CLR has come to an end. Obviously there is much more to cover, and at a deeper level, but we hope this helps you see how things work. Much of the information presented here is likely to change in future versions of the .NET Framework and the CLR, but even though the CLR data structures discussed here may change, the concepts should remain the same.

Hanu Kommalapati is an architect in Microsoft's Gulf Coast district (Houston). In his current role at Microsoft he helps customers build scalable component frameworks based on the .NET Framework. He can be reached at hanuk@microsoft.com.

Tom Christian is a senior escalation engineer with Microsoft developer support, working with ASP.NET and the .NET debugger extensions for WinDBG (sos/psscor). He is based in Charlotte, North Carolina, and can be reached at tomchris@microsoft.com.

Way to Lambda

Table of Contents

Introduction

Lambda expressions are a powerful way to make code more dynamic, easier to extend and also faster (see this article if you want to know why). They can also be used to reduce potential errors and make use of static typing and IntelliSense as well as the superior IDE of Visual Studio.

Lambda expressions were introduced with the .NET Framework 3.5 and C# 3 and have played an important part in technologies like LINQ and many of the techniques behind ASP.NET MVC. If you think about the implementation of various controls in ASP.NET MVC you'll find out that most of the magic is actually covered by using lambda expressions. Using one of the Html extension methods together with a lambda expression will make use of the model you have actually created in the background.

In this article I’ll try to cover the following things:

  • A brief introduction – what are lambda expressions exactly and why do they differ from anonymous methods (which we had before!)
  • A closer look at the performance of lambda expressions – are there scenarios where we gain or lose performance against standard methods
  • A really close look – how are lambda expressions handled in MSIL code
  • A few patterns from the JavaScript world ported to C#
  • Scenarios where lambda expressions excel – either performance-wise or out of pure comfort
  • Some new patterns that I've come up with (maybe someone else has also come up with them – but if so, it was beyond my knowledge)

So if you expect a beginner’s tutorial here I will probably disappoint you, unless you are a really advanced and smart beginner. Needless to say I am not such a guy, which is why I want to warn you: for this article you’ll need some advanced knowledge of C# and should know your way around this language.

What you can expect is an article that tries to explain some things. The article will also investigate some (at least for me) interesting questions. In the end I will present some practical examples and patterns that can be used on some occasions. I’ve found out that lambda expressions can simplify so many scenarios that writing down explicit patterns could be useful.

Background – What are lambda expressions?

In the first version of C# the construct of delegates was introduced. This concept was integrated to make passing functions possible. In a sense a delegate is a strongly typed (and managed) function pointer. A delegate can be much more (of course), but in essence that is what you get out of it. The problem was that passing a function required quite a lot of steps (usually):

  1. Writing the delegate (like a class), which includes specifying the return and argument types.
  2. Using the delegate as the type in the method that should receive some function with the signature that is described by the delegate.
  3. Creating an instance of the delegate with the specific function to be passed by this delegate type.

If this sounds complicated to you – it should, because essentially it was (well, it's not rocket science, but a lot more code than you would expect). Fortunately, step number 3 is usually not required, since the C# compiler does the delegate creation for you. Still, steps 1 and 2 are mandatory!
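
As a quick illustration of the three steps, here is a minimal sketch (the type and method names are made up for the example):

using System;

class DelegateSteps
{
	// Step 1: declare the delegate type.
	delegate double Transformer(double x);

	// Step 2: use the delegate type in the method that receives a function.
	static double Apply(Transformer f, double value) { return f(value); }

	static double Square(double x) { return x * x; }

	static void Main()
	{
		// Step 3: create the delegate instance (the compiler can also do this implicitly).
		Transformer t = new Transformer(Square);
		Console.WriteLine(Apply(t, 3.0));       // 9
		Console.WriteLine(Apply(Square, 3.0));  // same call, using the implicit conversion
	}
}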

Luckily C# 2 came with generics. Now we could write generic classes, methods and, more importantly, generic delegates! However, it took until the .NET Framework 3.5 for somebody at Microsoft to realize that there are actually just 2 generic delegates (with some "overloads") required to cover 99% of the delegate use-cases:

  • Action without any input arguments (no input and no output) and the generic overloads
  • Action<T1, ..., T16>, which take 1 to 16 types as parameters (no output), as well as
  • Func<T1, ..., T16, Tout>, which take 0 to 16 types as input parameters and 1 output parameter

While Action (and the corresponding generics) returns void (i.e. this is really just an action, which executes something), Func actually returns something of the last type that is specified. With those 2 delegates (and their overloads) we can really skip the first step most of the time. Step 2 is still required, but just uses Action and Func.
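
With these built-in delegates, step 2 reduces to taking an Action or Func parameter directly; a short sketch:

using System;

class BuiltInDelegates
{
	// Func<double, double>: one double in, one double out.
	static double Apply(Func<double, double> f, double value) { return f(value); }

	// Action<string>: one string in, nothing out.
	static void Run(Action<string> log) { log("done"); }

	static void Main()
	{
		Console.WriteLine(Apply(Math.Sqrt, 16.0)); // 4
		Run(Console.WriteLine);                    // prints "done"
	}
}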

So what if I just want to run some code? This issue was attacked in C# 2. In this version you could create anonymous methods using the delegate keyword. However, the syntax never got popular. A very simple example of such an anonymous method looks like the following:

Func<double, double> square = delegate (double x) {
	return x * x;
};

So let’s improve this syntax and extend the possibilities. Welcome to lambda expression country! First of all where does this name come from? The name is actually derived from the lambda calculus in mathematics, which basically just states what is really required to express a function. More precisely it is a formal system in mathematical logic for expressing computation by way of variable binding and substitution. So basically we have between 0 and N input arguments and one return value. In our programming language we can also have no return value (void).

Let’s have a look at some example lambda expressions:

//The compiler can resolve this, which makes calls like dummyLambda(); possible
var dummyLambda = () => { Console.WriteLine("Hallo World from a Lambda expression!"); };

//Can be used as with double y = square(25);
Func<double, double> square = x => x * x;

//Can be used as with double z = product(9, 5);
Func<double, double, double> product = (x, y) => x * y;

//Can be used as with printProduct(9, 5);
Action<double, double> printProduct = (x, y) => { Console.WriteLine(x * y); };

//Can be used as with var sum = dotProduct(new double[] { 1, 2, 3 }, new double[] { 4, 5, 6 });
Func<double[], double[], double> dotProduct = (x, y) => {
	var dim = Math.Min(x.Length, y.Length);
	var sum = 0.0;
	for(var i = 0; i != dim; i++)
		sum += x[i] * y[i];
	return sum;
};

//Can be used as with var result = await matrixVectorProductAsync(...);
Func<double[,], double[], Task<double[]>> matrixVectorProductAsync = async (x, y) => {
	var result = new double[x.GetLength(0)];
	/* do some stuff ... */
	return result;
};

What we learn directly from those statements:

  • If we have only one argument, then we can omit the round brackets ()
  • If we only have one statement and want to return this, then we can omit the curly brackets {} and skip the return keyword
  • We can state that our lambda expressions can be executed asynchronously – just add the async keyword as with usual methods
  • The var statement cannot be used in most cases – only in very special cases

Needless to say we could use var a lot more often (like always) if we actually specified the parameter types. This is optional and usually not done (because the types can be resolved from the delegate type that we are using in the assignment), but it is possible. Consider the following examples:

var square = (double x) => x * x;

var stringLengthSquare = (string s) => s.Length * s.Length;

var squareAndOutput = (decimal x, string s) => {
	var sqz = x * x;
	Console.WriteLine("Information by {0}: the square of {1} is {2}.", s, x, sqz);
};

Now we know most of the basic stuff, but there are a few more things which are really cool about lambda expressions (and make them SO useful in many cases). First of all consider this code snippet:

var a = 5;
Func<int, int> multiplyWith = x => x * a;
var result1 = multiplyWith(10); //50
a = 10;
var result2 = multiplyWith(10); //100

Ah okay! So you can use other variables in the upper scope. That’s not so special you would say. But I say this is much more special than you might think, because those are real captured variables, which makes our lambda expression a so called closure. Consider the following case:

void DoSomeStuff()
{
	var coeff = 10;
	Func<int, int> compute = (int x) => coeff * x;
	Action modifier = () => {
		coeff = 5;
	};

	var result1 = DoMoreStuff(compute);

	ModifyStuff(modifier);
	var result2 = DoMoreStuff(compute);
}

int DoMoreStuff(Func<int, int> computer)
{
	return computer(5);
}

void ModifyStuff(Action modifier)
{
	modifier();
}

What’s happening here? First we are creating a local variable and two lambdas in that scope. The first lambda should show that it is also possible to access local variables in other local scopes. This is actually quite impressive already. This means we are protecting a variable but still can access it within the other method. It does not matter if the other method is defined within this or in another class.

The second lambda should demonstrate that a lambda expression is also able to modify the upper scope variables. This means we can actually modify our local variables from other methods, by just passing a lambda that has been created in the corresponding scope. Therefore I consider closures a really mighty concept that (like parallel programming) could lead to unexpected results (similar to, though if we follow our code not as unexpected as, race conditions in parallel programming). To show one scenario with unexpected results we could do the following:

var buttons = new Button[10];

for(var i = 0; i < buttons.Length; i++)
{
	var button = new Button();
	button.Text = (i + 1) + ". Button - Click for Index!";
	button.Click += (s, e) => { MessageBox.Show(i.ToString()); };
	buttons[i] = button;
}

//What happens if we click ANY button?!

This is a tricky question that I usually ask my students in my JavaScript lecture. About 95% of the students would instantly say “Button 0 shows 0, Button 1 shows 1, …”. But some students already spot the trick and since the whole part of the lecture is about closures and functions it is obvious that there is a trick. The result is: Every button is showing 10!

The local scoped variable called i has changed its value and must have the value of buttons.Length, because obviously we already left the for-loop. There is an easy way around this mess (in this case). Just do the following with the body of the for-loop:

var button = new Button();
var index = i;
button.Text = (i + 1) + ". Button - Click for Index!";
button.Click += (s, e) => { MessageBox.Show(index.ToString()); };
buttons[i] = button;

This solves the problem: index is a new variable in every iteration of the loop, so each closure captures its own copy of the value of the more "global" (upper scoped) variable i.

The last topic of this advanced introduction is the possibility of having so called expression trees. This is only possible with lambda expressions and is responsible for the magic that is happening in ASP.NET MVC with the Html extension methods. The key question is: How can the target method find out

  1. what the name of the variable I am passing in is?
  2. what the structure of the body I am using is?
  3. what kind of types I am using within my body?

Now an Expression actually solves this problem. It allows us to dig our way through the compiler-generated expression tree. Additionally we can compile and execute the given function as with the usual Func or Action delegates. It also allows us to interpret the lambda expression later (at runtime).

Let’s have a look at an example about how to use the objects of type Expression:

Expression<Func<MyModel, int>> expr = model => model.MyProperty;
var member = expr.Body as MemberExpression;
var propertyName = member.Member.Name; //only execute if member != null  ...

This is the most simple example regarding the usage of such expressions. The principle is quite straightforward: by forming an object of type Expression the compiler generates meta information about the generated parse tree. This parse tree contains all relevant information like parameters (names, types, …) and the method body.

The method body contains the whole parse tree. There we have access to operators, operands as well as complete statements and (most importantly) the return name and type. The name of the return variable could be null as well. However, most of the time one will be interested in expressions like the one above. This is also similar to the way that ASP.NET MVC handles the Expression type – to get the name of the parameter to use. The advantage for the programmer is obviously that he cannot misspell the name of the property, since every misspelling results in a compilation error.
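
To see how this is typically wrapped, here is a small helper in the spirit of the MVC Html extension methods (the helper name is made up for this sketch):

using System;
using System.Linq.Expressions;

static class ExpressionHelper
{
	// Returns the name of the property accessed in the expression, e.g. "MyProperty".
	public static string GetPropertyName<TModel, TProperty>(Expression<Func<TModel, TProperty>> expr)
	{
		var member = expr.Body as MemberExpression;
		return member != null ? member.Member.Name : null;
	}
}

//Usage: ExpressionHelper.GetPropertyName((MyModel m) => m.MyProperty) returns "MyProperty".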

Remark In the scenario where the programmer is just interested in the name of the calling property, there is a much simpler (and more elegant) solution. The special parameter attribute CallerMemberName can be used to get the name of the calling method or property. The field is automatically filled out by the compiler. Therefore if we are just interested in getting to know the name (without more type information etc.), we would just write code like the example method below (which returns the name of the method that just called the WhatsMyName() method).

string WhatsMyName([CallerMemberName] string callingName = null)
{
    return callingName;
}

Performance of lambda expressions

A big question is: How fast are lambda expressions? Well, first we expect them to perform about as fast as regular functions, since they are compiler generated as well. In the next section we will see that the MSIL generated for lambda expressions is not that different to regular functions.

One of the most interesting questions is whether lambda expressions with closures perform as fast as methods using global variables. Another interesting question is whether the number of variables available in the local scope matters.

Let’s have a look at the code used for performing some benchmarks. All in all we are having a look at 4 different benchmarks, which should give us enough evidence to see differences between normal functions and lambda expressions.

using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace LambdaTests
{
    class StandardBenchmark : Benchmark
    {
        const int LENGTH = 100000;
        static double[] A;
        static double[] B;

        static void Init()
        {
            var r = new Random();
            A = new double[LENGTH];
            B = new double[LENGTH];

            for (var i = 0; i < LENGTH; i++)
            {
                A[i] = r.NextDouble();
                B[i] = r.NextDouble();
            }
        }

        static long LambdaBenchmark()
        {
            Func<double> Perform = () =>
            {
                var sum = 0.0;

                for (var i = 0; i < LENGTH; i++)
                    sum += A[i] * B[i];

                return sum;
            };
            var iterations = new double[100];
            var timing = new Stopwatch();
            timing.Start();

            for (var j = 0; j < iterations.Length; j++)
                iterations[j] = Perform();

            timing.Stop();
            Console.WriteLine("Time for Lambda-Benchmark: \t {0}ms", timing.ElapsedMilliseconds);
            return timing.ElapsedMilliseconds;
        }

        static long NormalBenchmark()
        {
            var iterations = new double[100];
            var timing = new Stopwatch();
            timing.Start();

            for (var j = 0; j < iterations.Length; j++)
                iterations[j] = NormalPerform();

            timing.Stop();
            Console.WriteLine("Time for Normal-Benchmark: \t {0}ms", timing.ElapsedMilliseconds);
            return timing.ElapsedMilliseconds;
        }

        static double NormalPerform()
        {
            var sum = 0.0;

            for (var i = 0; i < LENGTH; i++)
                sum += A[i] * B[i];

            return sum;
        }
    }
}

We could write this code much better using lambda expressions (which would then take the measurement of an arbitrary method that is passed in using the callback pattern, as we will find out). The reason for not doing this is to not spoil the final result. So here we are with essentially three methods: one that is called for the lambda test and one that is called for the normal test. The third method is then invoked within the normal test. The missing fourth method is our lambda expression, which is created in the first method. The computation does not matter; we just pick random numbers to avoid any compiler optimizations in this area. In the end we are just interested in the difference between normal methods and lambda expressions.

If we run those benchmarks we will see that lambda expressions usually do not perform worse than usual methods. One surprise might be that lambda expressions can actually perform slightly better than usual functions. However, this is certainly not true in the case of having closures, i.e. captured variables. This just means that one should not hesitate to use lambda expressions regularly. But we should think carefully about the performance losses we might get when using closures. In such scenarios we will usually lose a little bit of performance, which might still be quite OK. The loss has several causes, as we will explore in the next section.

The plain data for our benchmarks is shown in table below:

Test    Lambda [ms]    Normal [ms]
0       45 ± 1         46 ± 1
1       44 ± 1         46 ± 2
2       49 ± 3         45 ± 2
3       48 ± 2         45 ± 2

The plots corresponding to this data are displayed below. We can see that usual functions and lambda expressions are performing within the same limits, i.e. there is no performance loss when using lambda expressions.

Behind the curtain – MSIL

Using the famous tool LINQPad we can have a close look at the MSIL without any burden. A screenshot of investigating the IL by using LINQPad is shown below.

We will have a look at three examples. Let’s start off with the first one. The lambda expression looks like:

Action<string> DoSomethingLambda = (s) =>
{
	Console.WriteLine(s);// + local
};

The corresponding method has the following code:

void DoSomethingNormal(string s)
{
	Console.WriteLine(s);
}

Those two codes result in the following two snippets of MSIL code:

DoSomethingNormal:
IL_0000:  nop
IL_0001:  ldarg.1
IL_0002:  call        System.Console.WriteLine
IL_0007:  nop
IL_0008:  ret
<Main>b__0:
IL_0000:  nop
IL_0001:  ldarg.0
IL_0002:  call        System.Console.WriteLine
IL_0007:  nop
IL_0008:  ret

The big difference here is the naming and usage of the method, not the declaration. The declaration is actually the same. The compiler creates a new method in the local class and infers the usage of this method. This is nothing new – it is just a matter of convenience that we can use lambda expressions like this. From the MSIL view we are doing the same in both cases; namely invoking a method within the current object.

We could put this observation into a little diagram to illustrate the modification done by the compiler. In the picture below we see that the compiler actually moves the lambda expression to become a fixed method.

The second example shows the real magic of lambda expressions. In this example we are either using a (normal) method with global variables or a lambda expression with captured variables. The code reads as follows:

void Main()
{
	int local = 5;

	Action<string> DoSomethingLambda = (s) => {
		Console.WriteLine(s + local);
	};

	global = local;

	DoSomethingLambda("Test 1");
	DoSomethingNormal("Test 2");
}

int global;

void DoSomethingNormal(string s)
{
	Console.WriteLine(s + global);
}

Now there is nothing unusual here. The key question is: How are lambda expressions resolved from the compiler?

IL_0000:  newobj      UserQuery+<>c__DisplayClass1..ctor
IL_0005:  stloc.1
IL_0006:  nop
IL_0007:  ldloc.1
IL_0008:  ldc.i4.5
IL_0009:  stfld       UserQuery+<>c__DisplayClass1.local
IL_000E:  ldloc.1
IL_000F:  ldftn       UserQuery+<>c__DisplayClass1.<Main>b__0
IL_0015:  newobj      System.Action<System.String>..ctor
IL_001A:  stloc.0
IL_001B:  ldarg.0
IL_001C:  ldloc.1
IL_001D:  ldfld       UserQuery+<>c__DisplayClass1.local
IL_0022:  stfld       UserQuery.global
IL_0027:  ldloc.0
IL_0028:  ldstr       "Test 1"
IL_002D:  callvirt    System.Action<System.String>.Invoke
IL_0032:  nop
IL_0033:  ldarg.0
IL_0034:  ldstr       "Test 2"
IL_0039:  call        UserQuery.DoSomethingNormal
IL_003E:  nop         

DoSomethingNormal:
IL_0000:  nop
IL_0001:  ldarg.1
IL_0002:  ldarg.0
IL_0003:  ldfld       UserQuery.global
IL_0008:  box         System.Int32
IL_000D:  call        System.String.Concat
IL_0012:  call        System.Console.WriteLine
IL_0017:  nop
IL_0018:  ret         

<>c__DisplayClass1.<Main>b__0:
IL_0000:  nop
IL_0001:  ldarg.1
IL_0002:  ldarg.0
IL_0003:  ldfld       UserQuery+<>c__DisplayClass1.local
IL_0008:  box         System.Int32
IL_000D:  call        System.String.Concat
IL_0012:  call        System.Console.WriteLine
IL_0017:  nop
IL_0018:  ret         

<>c__DisplayClass1..ctor:
IL_0000:  ldarg.0
IL_0001:  call        System.Object..ctor
IL_0006:  ret

Again both functions are equal from the statements they call. The same mechanism has been applied again, namely the compiler generated a name for the function and placed it somewhere in the code. The big difference now is that the compiler also generated a class, where the compiler generated function (our lambda expression) has been placed in. An instance of this class is generated in the function, where we are (originally) creating the lambda expression. What’s the purpose of this class? It gives a global scope to the variables, which have been used as captured variables previously. With this trick, the lambda expression has access to the local scoped variables (because from the MSIL perspective, they are just global variables sitting in a class instance).

All variables are therefore assigned and read from the instance of the freshly generated class. This solves the problem of having references between variables (there just has to be one additional reference to the class instance – but that's it!). The compiler is also smart enough to place only those variables in the class which have been used as captured variables. Therefore we could have expected to have no performance issues when using lambda expressions. However, a warning is required: this behavior can cause memory leaks due to still-referenced lambda expressions. As long as the delegate lives, the captured scope stays alive as well (this should have been obvious before – but now we see the reason!).
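
A small sketch of the lifetime issue just mentioned (the event and buffer are invented for illustration): as long as someone holds on to the delegate, the compiler-generated closure object and everything it captures stay reachable.

using System;

class LeakExample
{
	static event Action Tick;                        // a long-lived publisher

	static void Subscribe()
	{
		var bigBuffer = new byte[100 * 1024 * 1024]; // captured local variable
		Tick += () => Console.WriteLine(bigBuffer.Length);
		// bigBuffer cannot be collected while the handler stays subscribed,
		// because the closure object generated by the compiler still references it.
	}
}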

Like before we will also put this into some nice little diagram. Here we see that in the case of closures not only the method is moved, but also the captured variables. All the moved objects will then be placed in a compiler generated class. Therefore we end up with instantiating a new object from a yet unknown class.

Porting some popular JavaScript patterns

One of the advantages of using (or knowing) JavaScript is the superior usage of functions. In JavaScript functions are just objects and can have properties assigned to them as well. In C# we cannot do everything that we can do in JavaScript, but we can do some things. One of the reasons for this is that JavaScript gives scope to variables within functions. Therefore one has to create (mostly anonymous) functions to localize variables. In C# we create scopes by using blocks, i.e. using curly brackets.

Of course in a way, functions do also give scope in C#. By using a lambda expression we are required to use curly brackets (i.e. create a new scope) for creating a variable within a lambda expression. However, additionally we can also create scopes locally.

Let’s have a look at some of the most useful JavaScript patterns that are now possible in C# by using lambda expressions.

Callback Pattern

This pattern is an old one. Actually the callback pattern has been used since the first version of the .NET Framework, but in a slightly different way. Now the deal is that lambda expressions can be used as closures, i.e. capturing local variables, which is an interesting feature that allows us to write code like the following:

void CreateTextBox()
{
	var tb = new TextBox();
	tb.IsReadOnly = true;
	tb.Text = "Please wait ...";
	DoSomeStuff(() => {
		tb.Text = string.Empty;
		tb.IsReadOnly = false;
	});
}

void DoSomeStuff(Action callback)
{
	// Do some stuff - asynchronous would be helpful ...
	callback();
}

This whole pattern is nothing new for people who are coming from JavaScript. There we tend to use this pattern a lot, since it is really useful and since we can use the parameter as an event handler for AJAX related events (oncompleted, onsuccess etc.), as well as for other helpers. If you are using LINQ, then you also use part of the callback pattern, since for example the LINQ Where operator will call back into your predicate for every element. This is just one example of when callback functions are useful. In the .NET world, events are usually the preferred way of doing callbacks (as the name suggests); they are something like a callback on steroids. The reasons for this are two-fold: there is a special keyword and type pattern (2 parameters: sender and arguments, where sender is usually of type object (the most general type) and the arguments inherit from EventArgs), and there is the opportunity for more than just one method to be invoked, by using the += (add) and -= (remove) operators.
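
For comparison, the same kind of completion notification written with an event instead of a raw callback could look like the following sketch (the class, event and handler names are made up):

using System;

class Worker
{
	public event EventHandler Completed;          // the "callback on steroids"

	public void DoSomeStuff()
	{
		// ... do the actual work ...
		var handler = Completed;
		if (handler != null)
			handler(this, EventArgs.Empty);       // notify all subscribers
	}
}

//Subscribing with a lambda, and unsubscribing again:
//var w = new Worker();
//EventHandler onDone = (sender, e) => Console.WriteLine("done");
//w.Completed += onDone;
//w.DoSomeStuff();
//w.Completed -= onDone;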

Returning Functions

As with usual functions, lambda expressions can also return a function pointer (delegate instance). This means that we can use a lambda expression to create and return a lambda expression (or just a delegate instance to an already defined method). There are plenty of scenarios where such a behavior might be helpful. First let’s have a look at some example code:

Func<string, string> SayMyName(string language)
{
	switch(language.ToLower())
	{
		case "fr":
			return name => {
				return "Je m'appelle " + name + ".";
			};
		case "de":
			return name => {
				return "Mein Name ist " + name + ".";
			};
		default:
			return name => {
				return "My name is " + name + ".";
			};
	}
}

void Main()
{
	var lang = "de";
	//Get language - e.g. by current OS settings
	var smn = SayMyName(lang);
	var name = Console.ReadLine();
	var sentence = smn(name);
	Console.WriteLine(sentence);
}

The code could have been shorter in this case. We could have also avoided a default return value by just throwing an exception if the requested language has not been found. However, for illustration purposes this example should show that this is kind of a function factory. Another way to do this would be involving a Hashtable or the even better (due to static typing) Dictionary<K, V> type.

static class Translations
{
	static readonly Dictionary<string, Func<string, string>> smnFunctions = new Dictionary<string, Func<string, string>>();

	static Translations()
	{
		smnFunctions.Add("fr", name => "Je m'appelle " + name + ".");
		smnFunctions.Add("de", name => "Mein Name ist " + name + ".");
		smnFunctions.Add("en", name => "My name is " + name + ".");
	}

	public static Func<string, string> GetSayMyName(string language)
	{
		//Check if the language is available has been omitted on purpose
		return smnFunctions[language];
	}
}

//Now it is sufficient to call Translations.GetSayMyName("de") to get the function with the German translation.

Even though this seems over-engineered, it might be the best way to do such function factories. After all, this way is very easy to extend and can be used in a lot of scenarios. This pattern in combination with reflection can make most code bases a lot more flexible, easier to maintain and more robust to extend. How such a pattern works is shown in the next picture.

Self-Defining Functions

The self-defining function pattern is a common trick in JavaScript and could be used to gain performance (and reliability) in any code. The main idea behind this pattern is that a function that has been set as a property (i.e. we only have a function pointer set on a variable) can be exchanged with another function very easily. Let’s have a look what that means exactly:

class SomeClass
{
	public Func<int> NextPrime
	{
		get;
		private set;
	}

	int prime;

	public SomeClass()
	{
		NextPrime = () => {
			prime = 2;

			NextPrime = () => {
				//Algorithm to determine next - starting at prime
				//Set prime
				return prime;
			};

			return prime;
		};
	}
}

What is done here? Well, in the first case we just get the first prime number, which is 2. Since this case is trivial, we can adjust our algorithm to exclude all even numbers by default. This will certainly speed up our algorithm, but we will still get 2 as the starting prime number. We do not have to check whether we already performed a query on the NextPrime() function, since the function redefines itself once the trivial case (2) has been returned. This way we save resources and can optimize our algorithm in the more interesting region (all numbers greater than 2).
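
To make the example above concrete, here is a runnable version of the self-defining NextPrime function, filling in a deliberately naive search for the next prime (the algorithm itself is just a placeholder):

using System;

class PrimeSource
{
	public Func<int> NextPrime { get; private set; }

	int prime;

	public PrimeSource()
	{
		NextPrime = () => {
			prime = 2;                      // trivial case, returned exactly once

			NextPrime = () => {             // redefinition used for every later call
				var candidate = prime + 1;
				while (!IsPrime(candidate))
					candidate++;
				prime = candidate;
				return prime;
			};

			return prime;
		};
	}

	static bool IsPrime(int n)
	{
		if (n < 2) return false;
		for (var d = 2; d * d <= n; d++)
			if (n % d == 0) return false;
		return true;
	}
}

//Usage: new PrimeSource().NextPrime() returns 2, then 3, 5, 7, ... on subsequent calls.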

We already see that this can be used to gain performance as well. Let’s consider the following example:

Action<int> loopBody = null;
loopBody = i => {
	if(i == 1000)
		loopBody = k => { /* set to the body for the rest of the operations */ };

	/* body for the first 1000 iterations */
};

for(int j = 0; j < 10000000; j++)
	loopBody(j);

Here we basically just have two distinct regions – one for the first 1000 iterations and another for the 9999000 remaining iterations. Usually we would need a condition to distinguish between the two. This would be unnecessary overhead in most cases, which is why we use a self-defining function that changes itself after the smaller region has been executed.

Immediately-Invoked Function Expression

In JavaScript immediately-invoked function expressions (so-called IIFEs) are quite common. The reason for this is that, unlike in C#, curly brackets do not give scope to local variables. Therefore one would pollute the global object (which is mostly the window object) with variables. This is unwanted for many reasons.

The solution is quite simple: while curly brackets do not give scope, functions do. Therefore variables defined within any function are restricted to this function (and its children). Since JavaScript users usually want those functions to be executed directly, it would be a waste of variables and statement lines to first assign them a name and then execute them. Another reason is that this execution is required only once.

In C# we can easily write such functions as well. Here we also do get a new scope, but this should not be our main focus, since we can easily create a new scope anywhere we want to. Let’s have a look at some example code:

((Action)(() => {
	// Do Something here!
}))();

Note that the lambda has to be cast to a delegate type (here Action) so that it can be invoked, since a lambda expression has no type of its own. If we want to do something with parameters, then we will also need to specify their types. Let's have an example of something that passes some arguments to the IIFE.

((Action<string, int>)((string s, int no) => {
	// Do Something here!
}))("Example", 8);

This seems like too many lines for gaining nothing. However, we could combine this pattern to use the async keyword. Let’s view an example:

await ((Func<string, int, Task>)(async (string s, int no) => {
	// Do Something here async using Tasks!
}))("Example", 8);

//Continue here after the task has been finished

Now there might be one or the other usage as an async-wrapper or similar.

Immediate Object Initialization

Quite closely related is immediate object initialization. The reason why I am including this pattern in an article about lambda expressions is that anonymous objects are quite powerful, as they can contain more than just simple types. One thing they could include is lambda expressions. This is why there is something that can be discussed in the area of lambda expressions.

//Create anonymous object
var person = new {
	Name = "Florian",
	Age = 28,
	Ask = (string question) => {
		Console.WriteLine("The answer to `" + question + "` is certainly 42!");
	}
};

//Execute function
person.Ask("Why are you doing this?");

If you want to compile this pattern, then you will most probably see a compiler error (at least I am seeing one). The mysterious reason is that lambda expressions cannot be assigned directly to properties of anonymous objects. If that does not make sense to you, then we are sitting in the same boat. Luckily for us, everything the compiler wants to tell us is: "Dude, I do not know what kind of delegate I should create for this lambda expression!". In this case it is easy to help the compiler. Just use the following code instead:

var person = new {
	Name = "Florian",
	Age = 28,
	Ask = (Action<string>)((string question) => {
		Console.WriteLine("The answer to `" + question + "` is certainly 42!");
	})
};

One of the questions that certainly arises is: In what scope does the function (in this case Ask) live? The answer is that it lives in the scope of the class that creates the anonymous object or in its own scope if it uses captured variables. Therefore the compiler still creates an anonymous object (which involves laying out the meta information for a compiler-generated class, instantiating a new object with the class information behind and using it), but is just setting the property Ask with the delegate object that refers to the position of our created lambda expression.

Caution You should avoid using this pattern when you actually want to access any of the properties of the anonymous object inside any of the lambda expressions you are directly setting on the anonymous object. The reason is the following: the C# compiler requires every variable to be declared before you can actually use it. In this case the usage would certainly be after the declaration; but how should the compiler know? From its point of view the access is simultaneous with the declaration, hence the variable person has not been declared yet.

There is one way out of this hell (actually there are more ways, but in my opinion this is the most elegant…). Consider the following code:

dynamic person = null;
person = new {
	Name = "Florian",
	Age = 28,
	Ask = (Action<string>)((string question) => {
		Console.WriteLine("The answer to `" + question + "` is certainly 42! My age is " + person.Age + ".");
	})
};

//Execute function
person.Ask("Why are you doing this?");

Now we declare it before. We could have done the same thing by stating that person is of type object, but in this case we would require reflection (or some nice wrappers) to access the properties of the anonymous object. Here we are relying on the DLR, which results in the nicest wrapper available for such things. Now the code is very JavaScript-ish and I do not know if this is a good thing or not … (that's why there is a caution for this remark!).

Init-Time Branching

This pattern is actually quite closely related to the self-defining function. The only difference is that in this case the function is not defining itself, but other functions. This is obviously only possible if the other functions are not defined in a classic way, but over properties (i.e. member variables).

The pattern is also known under the name load-time branching and is essentially an optimization pattern. This pattern has been created to avoid permanent usage of switch-case or if-else etc. control structures. So in a way one could say that this pattern is creating roads to connect certain branches of the code permanently.

Let’s consider the following example:

public Action AutoSave { get; private set; }

public void ReadSettings(Settings settings)
{
	/* Read some settings of the user */

	if(settings.EnableAutoSave)
		AutoSave = () => { /* Perform Auto Save */ };
	else
		AutoSave = () => { }; //Just do nothing!
}

Here we are doing two things. First we have one method to read out the user's settings (handling some arbitrary Settings class). If we find that the user has enabled auto saving, then we set the full code to the property. Otherwise we just place a dummy method in this location. Therefore we can always just call the AutoSave() property and invoke it – it will always do what has been set. There is no need to check the settings again or anything similar. We also do not need to save this one particular setting in a boolean variable, since the corresponding function has been set dynamically.
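
A short usage sketch of the pattern above (assuming Settings has a parameterless constructor and a settable EnableAutoSave property, as the snippet implies):

var settings = new Settings { EnableAutoSave = true };
ReadSettings(settings);   // branch exactly once, at load time

//Later, in the hot path, no condition has to be evaluated any more:
AutoSave();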

One might think that this is not a huge performance gain, but this is just one small example. In a very complex code this could actually save some time – especially if the scenarios are getting more complex and when the dynamically set methods will be called within (huge) loops.

Also (and I consider this the main reason) this code is probably easier to maintain (if one knows about this pattern) and easier to read. Instead of unnecessary control sequences one can focus on what's important: calling the auto save routine, for instance.

In JavaScript such a load-time branching pattern has been used the most in combination with feature (or browser) detection. While browser detection is in fact evil and should not be done on any website, feature detection is indeed quite useful and is used best in combination with this pattern. This is also the way that (as an example) jQuery detects the right object to use for AJAX requests. Once it spots the XMLHttpRequest object within the browser, there is no chance that the underlying browser will change in the middle of our script execution, resulting in the need to deal with an ActiveX object.

Scenarios in which lambdas are super useful

Some of the patterns are more applicable than others. One really useful pattern is the self-defining function expression for initializing parts of some objects. Let’s consider the following example:

We want to create an object that is capable of performing some kind of lazy loading. This means that even though the object has been properly instantiated, we did not load all the required resources. One reason to delay this is a massive IO operation (like a network transfer over the Internet) for obtaining the required data. We want to make sure that the data is as fresh as possible when we start working with it. Now there are certain ways to do this, and the most efficient is certainly the way the Entity Framework has solved this lazy loading scenario with LINQ. Here IQueryable<T> only stores the queries without having the underlying data. Once we require a result, not only is the constructed query executed, but it is executed in the most efficient form, e.g. as an SQL query on the remote database server.

In our scenario we just want to distinguish between the two states. First we query, then everything should be prepared and queries should be performed on the loaded data.

class LazyLoad
{
	public LazyLoad()
	{
		Search = query => {
			var source = Database.SearchQuery(query);

			Search = subquery => source.Filter(subquery);

			return source;
		};
	}

	public Func<string, IEnumerable<ResultObject>> Search { get; private set; }
}

So we basically have two different kinds of methods to be set here. The first one will pull the data out of the Database (or whatever this static class is doing), while the second one will filter the data that has been pulled out from the database. Once we have our result we will basically just work with the set of results from this first query. Of course one could also imagine building in another method to reset the behavior of this class, or other methods that would be useful for production code.

Another example is init-time branching. Assume that we have an object that has one method called Perform(). This method will be used to invoke some code. The object that contains this method could be initialized (i.e. constructed) in three different ways:

  1. By passing the function to invoke (direct).
  2. By passing some object which contains the function to invoke (indirect).
  3. Or by passing the information of the first case in a serialized form.

Now we could save all those three states (along with the complete information given) as global variables. The invocation of the Perform() method would now have to look at the current state (either saved in an enumeration variable, or due to comparisons with null) and then determine the right way to be invoked. Finally the invocation could begin.

A much better way is to have the Perform() method as a property. This property can only be set within the object and is a delegate type. Now we can set the property directly in the corresponding constructor. Therefore we can omit the global variables and do not have to worry about in which way the object has been constructed. This performs better and has the advantage of being fixed, once constructed (as it should be).

A little bit of example code regarding this scenario:

class Example
{
	public Action<object> Perform { get; private set; }

	public Example(Action<object> methodToBeInvoked)
	{
		Perform = methodToBeInvoked;
	}

	//The interface is arbitrary as well
	public Example(IHaveThatFunction mother)
	{
		//The passed object must have the method we are interested in
		Perform = mother.TheCorrespondingFunction;
	}

	public Example(string methodSource)
	{
		//The Compile method is arbitrary and not part of .NET or C#
		Perform = Compile(methodSource);
	}
}

Even though this example seems to be constructed (pun intended), it can be applied quite often, however mostly with just the first two possible calls. Interesting scenarios arise in areas such as domain-specific languages (DSLs), compilers, logging frameworks, data access layers and many more. Usually there are many ways to finish the task, but a carefully thought-out lambda expression might be the most elegant solution.

One scenario where one would certainly benefit from having an immediately invoked function expression is the area of functional programming. However, without going too deep into this topic, I'll show another way to use an IIFE in C#. The scenario I am showing is also a common one, but an IIFE will certainly not be used that often for it (and I believe that this is really OK).

Func<double, double> myfunc;
var firstValue = (myfunc = (x) => {
	return 2.0 * x * x - 0.5 * x;
})(1);
var secondValue = myfunc(2);
//...

One can also use immediately invoked functions to prevent certain (non-static) methods from being invoked more than once. This is then a combination of self-defining functions with init-time branching and IIFE.
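
One way such a combination could look (a sketch, not taken from any sample above): the one-time work sits in a delegate property, is invoked immediately, and replaces itself with a no-op, so any later invocation is harmless.

using System;

class Connection
{
	public Action EnsureOpen { get; private set; }

	public Connection()
	{
		(EnsureOpen = () => {
			/* open the underlying resource exactly once */
			EnsureOpen = () => { };   // self-defining: every later call does nothing
		})();                         // immediately invoked, so the work happens right here
	}
}

//Usage:
//var c = new Connection();
//c.EnsureOpen();   // already initialized in the constructor, does nothing
//c.EnsureOpen();   // does nothing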

Some new lambda focused design patterns

This section will introduce some patterns I've come up with that have lambda expressions at their core. I do not think that all of them are completely new, but at least I have not seen anyone putting a name tag on them. So I decided that I'll try to come up with some names that might be good or not (it will be a matter of taste). At least the names I'll pick try to be descriptive. I will also give a judgement on whether each pattern is useful, powerful or dangerous. To say something in advance: most patterns are quite powerful, but might introduce potential bugs in your code. So handle with care!

Polymorphism completely in your hands

Lambda expressions can be used to create something like polymorphism (override) without using abstract or virtual (that does not mean that you cannot use those keywords). Consider the following code snippet:

class MyBaseClass
{
	public Action SomeAction { get; protected set; }

	public MyBaseClass()
	{
		SomeAction = () => {
			//Do something!
		};
	}
}

Nothing new here so far. We are creating a class which is publishing a function (a lambda expression) over a property. This is again quite JavaScript-ish. The interesting part is that not only does this class have control over changing the function that is exposed by the property, but so do children of this class. Take a look at this code snippet:

class MyInheritedClass : MyBaseClass
{
	public MyInheritedClass()
	{
		SomeAction = () => {
			//Do something different!
		};
	}
}

Aha! So we could actually just change the method (or, to be more accurate, the method that is set to the property) by abusing the protected access modifier. The disadvantage of this approach is of course that we cannot directly access the parent's implementation. Here we are lacking the powers of base, since the base's property has the same value. If one really needs something like that, then I suggest the following *pattern*:

class MyBaseClass
{
	public Action SomeAction { get; private set; }

	Stack<Action> previousActions;

	protected void AddSomeAction(Action newMethod)
	{
		previousActions.Push(SomeAction);
		SomeAction = newMethod;
	}

	protected void RemoveSomeAction()
	{
		if(previousActions.Count == 0)
			return;

		SomeAction = previousActions.Pop();
	}

	public MyBaseClass()
	{
		previousActions = new Stack<Action>();

		SomeAction = () => {
			//Do something!
		};
	}
}

In this case the children have to go over the method AddSomeAction() to override the current set method. This method will then just push the currently set method to the stack of previous methods enabling us to restore any previous state.
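
A derived class using this variant of the pattern could then look like the following sketch:

class MyInheritedClass : MyBaseClass
{
	public MyInheritedClass()
	{
		AddSomeAction(() => {
			//Do something different!
		});
	}

	public void RestoreParentBehavior()
	{
		RemoveSomeAction();   // pops back to the previously pushed implementation
	}
}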

My name for this pattern is Lambda Property Polymorphism Pattern (or short LP3). It basically describes the possibility of encapsulating any function in a property, which can then be set by derivatives of the base class. The stack is just an addition to this pattern, which does not change the pattern's goal of using a property as the point of interaction.

Why this pattern? Well, there are several reasons. To start with: because we can! But wait, this pattern can actually become quite handy once you start to use quite different kinds of properties. Suddenly the word "polymorphism" takes on a completely new meaning. But this will be a different pattern… For now I just want to point out that this pattern can in reality do things that have been thought to be impossible.

An example: you want (it is not recommended, but it would be the most elegant solution for your problem) to override a static method. Well, inheritance is not possible with static methods. The reason for this is quite simple: inheritance applies only to instances, whereas static members are not bound to an instance. They are the same for all instances. This also implies a warning: the following pattern might not have the outcome you want to have, so only use it when you know what you are doing!

Here’s some example code:

void Main()
{
	var mother = HotDaughter.Activator().Message;
	//mother = "I am the mother"
	var create = new HotDaughter();
	var daughter = HotDaughter.Activator().Message;
	//daughter = "I am the daughter"
}

class CoolMother
{
	public static Func<CoolMother> Activator { get; protected set; }

	//We are only doing this to avoid NULL references!
	static CoolMother()
	{
		Activator = () => new CoolMother();
	}

	public CoolMother()
	{
		//Message of every mother
		Message = "I am the mother";
	}

	public string Message { get; protected set; }
}

class HotDaughter : CoolMother
{
	public HotDaughter()
	{
		//Once this constructor has been "touched" we set the Activator ...
		Activator = () => new HotDaughter();
		//Message of every daughter
		Message = "I am the daughter";
	}
}

This is only a very simple and hopefully not totally misleading example. Things can become very complex with such a pattern, which is why I would usually want to avoid it. Nevertheless it is possible (and it is also possible to construct all those static properties and functions in such a way that you always get the one you are interested in). A good solution for static polymorphism (yes, it is possible!) is not easy, requires some coding, and should only be done if it really solves your problem without any additional headaches.

More to come …

This section will be updated with more patterns the next few days… So stay tuned!

Using the code

I’ve compiled a collection of some of the samples and made a list of the benchmarks. I’ve collected everything in a console project – so it should basically run on every platform (I mean Mono, .NET, Silverlight, … you name it!) that supports C# up to version 3. My recommendation is that one should first try around with LINQPad. Most of the sample code here can be compiled directly within LINQPad. Some examples are very abstract and cannot be compiled without creating a proper scenario as described.

Nevertheless I hope that the code demonstrates some of the features I've mentioned in this article. I also hope that lambda expressions become as widely used as interfaces are nowadays. Thinking back some years, interfaces seemed totally over-engineered with not much use at all. Nowadays everyone is just talking about interfaces – "where's the implementation?" one might ask… Lambda expressions are so useful that the greatest extensions are built around making them work as they should. Could you imagine programming in C# without LINQ, ASP.NET MVC, Reactive Extensions, Tasks … (your favorite framework?) the way you know and enjoy it?

Points of Interest

When I first saw the syntax for lambda expressions I was somewhat frightened. The syntax seemed complicated and not very useful. Now I have completely revised my opinion. I think the syntax is actually quite amazing (especially compared to the syntax that is present in C++11, but this is just a matter of taste). I also think that lambda expressions are a crucial part of the whole C# language.

Without this language feature I doubt that C# would have gained such nice possibilities as ASP.NET MVC, lots of the MVVM frameworks, … and not to mention LINQ! Of course all those technologies would have been possible as well, but not in such a clear and nicely usable way.

A personal note at the end: it has been one year since I started actively contributing to CodeProject! This is my 16th article (which is great, since I like integer powers of 2) and I am happy that so many people find some of my articles helpful. I hope that all of you will appreciate what is about to come in 2013, where I will probably focus on creating a bridge between C# and JavaScript (I leave it open to you to imagine what I mean by that – and no: it's not one of those C# to JavaScript or MSIL to JavaScript transpilers).

That being said: I wish everyone a Merry Christmas and a Happy New Year 2013!

History

  • v1.0.0 | Initial Release | 12.12.2012
  • v1.1.0 | Added LP3 pattern | 14.12.2012

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

from:http://www.codeproject.com/Articles/507985/Way-to-Lambda

Chinese version: http://www.cnblogs.com/gaochundong/archive/2013/08/05/way_to_lambda.html

C# Language Features, From C# 2.0 to 4.0

Contents

Introduction

This article discusses the language features introduced in C# 2.0, 3.0, and 4.0. The purpose of writing this article is to have a single repository of all the new language features introduced over the last seven years and to illustrate (where applicable) the advantages of the new features. It is not intended to be a comprehensive discussion of each feature; for that I have links for further reading. The impetus for this article is mainly that I could not find a single repository that does what this article does. In fact, I couldn't even find a Microsoft webpage that describes them. Instead, I had to rely on the universal authority for everything, Wikipedia, which has a couple of nice tables on the matter.

C# 2.0 Features

Generics

First off, generics are not like C++ templates. They primarily provide for strongly typed collections.

Without Generics

public void WithoutGenerics()
{
  ArrayList list = new ArrayList();

  // ArrayList is of type object, therefore essentially untyped.
  // Results in boxing and unboxing of value types
  // Results in ability to mix types which is bad practice.
  list.Add(1);
  list.Add("foo");
}

Without generics, we incur a "boxing" penalty for value types because ArrayList stores everything as "object", and furthermore, we can quite easily add incompatible types to a list.

With Generics

public void WithGenerics()
{
  // Generics provide for strongly typed collections.
  List<int> list = new List<int>();
  list.Add(1); // allowed
  // list.Add("foo"); // not allowed
}

With generics we are prevented from using a typed collection with an incompatible type.

Constraints and Method Parameters and Return Types

Generics can also be used in non-collection scenarios, such as enforcing the type of a parameter or return value. For example, here we create a generic method (the reason we don't create a generic MyVector will be discussed in a minute):

public class MyVector
{
    public int X { get; set; }
    public int Y { get; set; }
}

class Program
{
    public static T AddVector<T>(T a, T b)
      where T : MyVector, new()
    {
      T newVector = new T();
      newVector.X = a.X + b.X;
      newVector.Y = a.Y + b.Y;

      return newVector;
    }

    static void Main(string[] args)
    {
     MyVector a = new MyVector();
     a.X = 1;
     a.Y = 2;
     MyVector b = new MyVector();
     b.X = 10;
     b.Y = 11;
     MyVector c = AddVector(a, b);
     Console.WriteLine(c.X + ", " + c.Y);
   }
}

Notice the constraint. Read more about constraints here. The constraint tells the compiler that the generic parameter must be MyVector (or a type derived from it), and the new() constraint additionally requires a public parameterless constructor so the method can write new T(). The above code is not very useful as written, because it is still tied to MyVector’s int properties; supporting vectors of other element types (int, double, float, etc.) would require a separate class and a separate “AddVector” for each.

What we can’t do with generics (but could with C++ templates) is perform operator functions on generic types. For example, we can’t do this:

public class MyVector<T>
{
  public T X { get; set; }
  public T Y { get; set; }

  // Doesn't work:
  public void AddVector(MyVector<T> v)
  {
    X = X + v.X;
    Y = Y + v.Y;
  }
}

This results in an “operator ‘+’ cannot be applied to operands of type ‘T’ and ‘T'” error! More on workarounds for this later.

Factories

You might see generics used in factories. For example:

public static T Create<T>() where T : new()
{
  return new T();
}

The above is a very silly thing to do, but if you are writing an Inversion of Control layer, you might be doing some complicated things (like loading  assemblies) based on the type the factory needs to create.
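
To make that less abstract, here is a minimal registry-style factory sketch; TinyFactory, Register, and the two-type registration scheme are made up for illustration (assuming the usual System and System.Collections.Generic namespaces), not a real IoC container:

public static class TinyFactory
{
  // Maps a requested interface type to the concrete type registered for it.
  private static readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();

  // The new() constraint guarantees at compile time that the concrete type
  // has a public parameterless constructor.
  public static void Register<TInterface, TConcrete>()
    where TConcrete : TInterface, new()
  {
    registrations[typeof(TInterface)] = typeof(TConcrete);
  }

  public static TInterface Create<TInterface>()
  {
    // Activator stands in for whatever "complicated things" a real IoC
    // container would do (assembly loading, lifetime management, ...).
    return (TInterface)Activator.CreateInstance(registrations[typeof(TInterface)]);
  }
}

Registering would then look like TinyFactory.Register<IRepository, SqlRepository>() and creation like TinyFactory.Create<IRepository>(), where both type names are, again, purely hypothetical.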

Partial Types

Partial types can be used on classes, structs, and interfaces. In my opinion, partial types were created to separate tool-generated code from manually written code. For example, the Visual Studio form designer generates the code-behind for the UI layout, and to keep this code stable and independent from your manually written code, such as the event handlers, Visual Studio creates two separate files and marks the class as partial in both. For example, let’s say we have two separate files:

File 1:

public partial class MyPartial
{
  public int Foo { get; set; }
}

File 2:

public partial class MyPartial
{
  public int Bar { get; set; }
}

We can use the class, which has been defined in two separate files:

public class PartialExample
{
  public MyPartial foobar = new MyPartial();

  public PartialExample()
  {
    foobar.Foo = 1;
    foobar.Bar = 2;
  }
}

Do not use partial classes to implement a model-view-controller pattern! Just because you can separate the code into different files, one for the model, one for the view, and one for the controller, does not mean you are implementing the MVC pattern correctly!

The old way of handling tool generated code was typically to put comments in the code like:

// Begin Tool Generated Code: DO NOT TOUCH
   ... code ...
// End Tool Generated Code

And the tool would place its code between the comments.

Anonymous Methods

Read more.

Anonymous methods let us define the body of a delegate (such as an event handler) inline rather than as a separate named method.

The Old Way

Before anonymous methods, we had to write a separate named method for the delegate implementation:

public class Holloween
{
  public event EventHandler ScareMe;

  public void OldBoo()
  {
    ScareMe+=new EventHandler(DoIt);
  }

  public void Boo()
  {
    ScareMe(this, EventArgs.Empty);
  }

  public void DoIt(object sender, EventArgs args)
  {
    Console.WriteLine("Boo!");
  }
}

The New Way

With anonymous methods, we can implement the behavior inline:

public void NewBoo()
{
  ScareMe += delegate(object sender, EventArgs args) { Console.WriteLine("Boo!"); };
}

Async Tasks

We can do the same thing with the Thread class:

public void AsyncBoo()
{
  new Thread(delegate() { Console.WriteLine("Boo!"); }).Start();
}

Note that we write the anonymous method as “delegate()”–note the ‘()’–because the Thread constructor has two overloads (ThreadStart and ParameterizedThreadStart); the explicit empty parameter list tells the compiler to use the parameterless ThreadStart form and avoids an ambiguity error.

Updating the UI

My favorite example is calling the main application thread from a worker thread to update a UI component:

/// <summary>
/// Called from some async process:
/// </summary>
public void ApplicationThreadBoo()
{
  myForm.Invoke((MethodInvoker)delegate { textBox.Text = "Boo"; });
}

Iterators

Read more.

Iterators reduce the amount of code we have to write to iterate over a custom collection.

The Old Way

Prior to C# 2.0, we had to implement the IEnumerator interface, supplying the Current, MoveNext, and Reset operations manually:

public class DaysOfWeekOld : IEnumerable
{
  protected string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday",
                                             "Friday", "Saturday", "Sunday" };

  public int Count { get { return days.Length; } }
  public string this[int idx] { get { return days[idx]; } }

  public IEnumerator GetEnumerator()
  {
    return new DaysOfWeekEnumerator(this);
  }
}

public class DaysOfWeekEnumerator : IEnumerator
{
  protected DaysOfWeekOld dow;
  protected int pos = -1;

  public DaysOfWeekEnumerator(DaysOfWeekOld dow)
  {
    this.dow = dow;
  }

  public object Current
  {
    get { return dow[pos]; }
  }

  public bool MoveNext()
  {
    ++pos;

    return (pos < dow.Count);
  }

  public void Reset()
  {
    pos = -1;
  }
}

The New Way

In the new approach, we can use the yield keyword to iterate through the collection:

public class DaysOfWeekNew : IEnumerable
{
  protected string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday",
                                            "Friday", "Saturday", "Sunday" };

  public IEnumerator GetEnumerator()
  {
    for (int i = 0; i < days.Length; i++)
    {
      yield return days[i];
    }
  }
}

This is much more readable and also ensures that we don’t access elements in the collection beyond the number of items in the collection.

We can also implement a generic enumerator, which provides a type-safe iterator but requires us to implement both the generic and non-generic GetEnumerator methods:

public class DaysOfWeekNewGeneric : IEnumerable<string>
{
  protected string[] days = new string[] { "Monday", "Tuesday", "Wednesday", "Thursday",
                                            "Friday", "Saturday", "Sunday" };

  IEnumerator IEnumerable.GetEnumerator()
  {
    return Enumerate();
  }

  public IEnumerator<string> GetEnumerator()
  {
    return Enumerate();
  }

  public IEnumerator<string> Enumerate()
  {
    for (int i = 0; i < days.Length; i++)
    {
      yield return days[i];
    }
  }
}

So, for example, in the non-generic version, I could write:

DaysOfWeekNew dow2 = new DaysOfWeekNew();

foreach (string day in dow2)
{
  Console.WriteLine(day);
}

which is perfectly valid, but I could also write:

DaysOfWeekNew dow2 = new DaysOfWeekNew();

foreach (int day in dow2)
{
  Console.WriteLine(day);
}

The error in casting from a string to an integer is caught at runtime, not compile time. Using a generic IEnumerable<T>,  an improper cast is caught at compile time and also by the IDE:

DaysOfWeekNewGeneric dow3 = new DaysOfWeekNewGeneric();

foreach (int day in dow3)
{
  Console.WriteLine(day);
}

The above code is invalid and generates the compiler error:

“error CS0030: Cannot convert type ‘string’ to ‘int'”

Thus, the implementation of generic iterators in C# 2.0 increases readability and type safety when using iterators.

Nullable Types

Read more.

Nullable types allow a value type to take on an additional “value”, being “null”. I’ve found this primarily useful when working with data tables. For example:

public class Record
{
  public int ID { get; set; }
  public string Name { get; set; }
  public int? ParentID { get; set; } 
}

public class NullableTypes
{
  protected DataTable people;

  public NullableTypes()
  {
    people = new DataTable();

    // Note that I am mixing in a C# 3.0 feature here, object initializers,
    // for how AllowDBNull is set. I'm doing this because I think the example
    // is more readable, even though it is not C# 2.0 compilable.

    people.Columns.Add(new DataColumn("ID", typeof(int)) {AllowDBNull=false});
    people.Columns.Add(new DataColumn("Name", typeof(string)) { AllowDBNull = false });
    people.Columns.Add(new DataColumn("ParentID", typeof(int)) { AllowDBNull = true });

    DataRow row = people.NewRow();
    row["ID"] = 1;
    row["Name"] = "Marc";
    row["ParentID"] = DBNull.Value; // Marc does not have a parent!
    people.Rows.Add(row);
  }

  public Record GetRecord(int idx)
  {
    return new Record()
    {
      ID = people.Rows[idx].Field<int>("ID"),
      Name = people.Rows[idx].Field<string>("Name"),
      ParentID = people.Rows[idx].Field<int?>("ParentID"),
    };
  }
}

In the above example, the Field extension method (I’ll discuss extension methods later) automatically converts DBNull.Value to a null, which in this schema is a valid “no parent” value for the foreign key.

You will also see nullable types used in various third-party frameworks to represent “no value.” For example, in the DevExpress framework, a checkbox can be set to false, true, or no value. The reason for this is again to support mapping a control directly to a structure that backs a table with nullable fields. That said, I think you are most likely to see nullable types in ORM implementations.
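
Outside of data access, the basic mechanics of a nullable value type look like this (a minimal sketch):

int? parentID = null;               // shorthand for Nullable<int>

if (!parentID.HasValue)
{
  Console.WriteLine("No parent.");
}

// The null-coalescing operator supplies a fallback when the value is null;
// -1 is just an arbitrary sentinel chosen for this example.
int effectiveParentID = parentID ?? -1;

parentID = 3;
Console.WriteLine(parentID.Value);  // prints 3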

Private Setters (properties)

Read more.

A private setter exposes a property as read-only, which is different from designating a field as readonly. A field designated readonly can only be assigned in its variable initializer or during construction. With a private setter, the property is exposed as read-only to the outside world, but the class implementing the property can still write to it:

public class PrivateSetter
{
  public int readable;
  public readonly int readable2;

  public int Readable
  {
    get { return readable; }
    // Accessible only by this class.
    private set { readable = value; }
  }

  public int Readable2
  {
    get { return readable2; }
    // what would the setter do here?
  }

  public PrivateSetter()
  {
    // readonly fields can be initialized in the constructor.
    readable2 = 20;
  }

  public void Update()
  {
    // Allowed:
    Readable = 10;
    // Not allowed:
    // readable2 = 30;
  }
}

Contrast the above implementation with C# 3.0’s auto-implemented properties, which I discuss below.

Method Group Conversions (delegates)

I must admit to a “what the heck is this?” experience for this feature. First (for my education), a “method group” is the set of methods sharing the same name, in other words a method and its overloads. This post was very helpful. I stumbled across this post that explained method group conversion with delegates. This also relates to covariance and contravariance, which were expanded in C# 4.0. Read more here. But let’s try the basic concept, which is to assign a method to a delegate without having to use “new” (even though behind the scenes, that’s apparently what the emitted IL does).

The Old Way

public class MethodGroupConversion
{
  public delegate string ChangeString(string str);
  public ChangeString StringOperation;

  public MethodGroupConversion()
  {
    StringOperation = new ChangeString(AddSpaces);
  }

  public string Go(string str)
  {
    return StringOperation(str);
  }

  protected string AddSpaces(string str)
  {
    return str + " ";
  }
}

The New Way

We replace the constructor with a more straightforward assignment:

public MethodGroupConversion()
{
  StringOperation = AddSpaces;
}

OK, that seems simple enough.

C# 3.0 Features

Implicitly Typed Local Variables

Read more.

The “var” keyword is a new feature of C# 3.0. Using the “var” keyword, you are relying on the compiler to infer the variable type  rather than explicitly defining it. So, for example, instead of:

public void Example1()
{
  // old:
  Dictionary<string, int> explicitDict = new Dictionary<string, int>();

  // new:
  var implicitDict = new Dictionary<string, int>();
}

While it may seem like mere syntactic sugar, the real strength of implicit typing is its use in conjunction with anonymous types (see below).

Restrictions

Note the phrase “local variables” in the heading for this section. As Richard Deeming commented below, what I mean by this is that you cannot write var as a parameter or return type in a method signature; you can still pass an implicitly typed variable to a method whose parameter type is explicit, and (more obviously) a value returned from a method with an explicit return type can be assigned to a var.
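
To make the restriction concrete, here is a small sketch (PrintCount is a made-up method): var may not appear in the method signature, but an implicitly typed local works fine on either side of the call.

public void Caller()
{
  var dict = new Dictionary<string, int>();  // implicitly typed local
  dict["answer"] = 42;

  var count = PrintCount(dict);              // fine: dict really is a Dictionary<string, int>,
                                             // and var picks up the explicit int return type
}

// public int PrintCount(var d) { ... }      // not allowed: var is only for local variables
public int PrintCount(Dictionary<string, int> d)
{
  Console.WriteLine(d.Count);
  return d.Count;
}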

Object and Collection Initializers

Read more.

The Old Way

Previously, to initialize property values from outside of the class, we would have to either use a constructor:

public Record(int id, string name, int? parentID)
{
  ID = id;
  Name = name;
  ParentID = parentID;
}
...
new Record(1, "Marc", null);

or initialize the properties separately:

Record rec=new Record();
rec.ID = 1;
rec.Name = "Marc";
rec.ParentID = null;

The New Way

In its explicit form, this feature simply allows us to initialize properties and collections when we create the object. We’ve already seen examples in the code above:

Record r = new Record() {ID = 1, Name = "Marc", ParentID = 3};

More interesting is how this feature is used to initialize anonymous types (see below), especially with LINQ.

Initializing Collections

Similarly, a collection can be initialized inline:

List<Record> records = new List<Record>()
{
  new Record(1, "Marc", null),
  new Record(2, "Ian", 1),
};

Auto-Implemented Properties

In the C# 2.0 section, I described the private setter for properties. Let’s look at the same implementation using auto-implemented properties:

public class AutoImplement
{
  public int Readable { get; private set; }
  public int Readable2 { get { return 20; } }

  public void Update()
  {
    // Allowed:
    Readable = 10;
    // Not allowed:
    // Readable2 = 30;
  }
}

The code is a lot cleaner, but the disadvantage is that, for properties that need to fire events or have some other business logic or validation associated with them, you have to go back to the old way of implementing the backing field manually, as sketched below. One proposed solution for firing property change events from auto-implemented properties is to use AOP techniques, as written up in Tamir Khason’s Code Project technical blog.
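
For instance, a property that must raise a change notification still needs a manual backing field; a minimal sketch using INotifyPropertyChanged (from System.ComponentModel) might look like this:

public class ObservableRecord : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;

  // Auto-implemented property: concise, but there is nowhere to hook the event.
  public int ID { get; set; }

  // Manual backing field so the setter can raise PropertyChanged.
  protected string name;

  public string Name
  {
    get { return name; }
    set
    {
      if (name != value)
      {
        name = value;
        PropertyChangedEventHandler handler = PropertyChanged;

        if (handler != null)
        {
          handler(this, new PropertyChangedEventArgs("Name"));
        }
      }
    }
  }
}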

Anonymous Types

Read more.

Anonymous types let us create “structures” without defining a backing class or struct, relying on implicit typing (var) and object initializers. For example, if we have a collection of Record objects, we can return a subset of the properties in this LINQ statement:

public void Example()
{
  List<Record> records = new List<Record>()
  {
    new Record(1, "Marc", null),
    new Record(2, "Ian", 1),
  };

  var idAndName = from r in records select new { r.ID, r.Name };
}

Here we see how several features come into play at once:

  • LINQ
  • Implicit types
  • Object initialization
  • Anonymous types

If we run the debugger and inspect “idAndName”, we’ll see that it has a value:

{System.Linq.Enumerable.WhereSelectListIterator<CSharpComparison.Record,
          <>f__AnonymousType0<int,string>>}

and (ready for it?) the type:

System.Collections.Generic.IEnumerable<<>f__AnonymousType0<int,string>> 
   {System.Linq.Enumerable.WhereSelectListIterator<CSharpComparison.Record,
   <>f__AnonymousType0<int,string>>}

Imagine having to state that type name explicitly. Here we can see the advantage of implicit typing, especially in conjunction with anonymous types.

Extension Methods

Read more.

Extension methods are a mechanism for extending the behavior of a class externally to its implementation. For example, the String class is sealed, so we can’t inherit from it, but there are a lot of useful functions that the String class doesn’t provide. For example, working with Graphviz, I often need to put quotes around an object name.

Before Extension Methods

Before extension methods, I would probably end up writing something like this:

string graphVizObjectName = "\"" + name +"\"";

Not very readable, reusable, or bug-proof (what if name is null?).

With Extension Methods

With extension methods, I can write an extension:

public static class StringHelpersExtensions
{
  public static string Quote(this String src)
  {
    return "\"" + src + "\"";
  }
}

(OK, that part looks pretty much the same) – but I would use it like this:

string graphVizObjectName = name.Quote();

Not only is this more readable, but it’s also more reusable, as the behavior is now available wherever the extension’s namespace is imported.
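
One nice side effect: because an extension method is really just a static method call, invoking name.Quote() on a null reference does not itself throw, so the helper can deal with the “what if name is null?” concern internally. A small sketch of that variation:

public static string Quote(this String src)
{
  // src may legitimately be null here; the call site name.Quote() does not throw on its own.
  return "\"" + (src ?? string.Empty) + "\"";
}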

Query Expressions

Read more.

Query expressions seem to be a synonymous phrase for LINQ (Language-Integrated Query). Humorously, the Microsoft website I just referenced has the header “LINQ Query Expressions.” Redundant!

Query expressions are written in a declarative syntax and provide the ability to query an enumerable or “queryable” object using complex filters, ordering, grouping, and joins, very similar in fact to how you would work with SQL and relational data.

As I wrote about above with regards to anonymous types, here’s a LINQ statement:

var idAndName = from r in records select new { r.ID, r.Name };

LINQ expressions can get really complex, and working with .NET classes via LINQ relies heavily on extension methods. LINQ is far too large a topic (there are whole books on the subject) and is definitely outside the purview of this article!

Left and Right Joins

Joins in LINQ are inner joins by default. I was recently looking up how to do left and right joins and came across this useful post; the basic shape is sketched below.
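
The usual pattern is a group join (“join … into”) followed by DefaultIfEmpty. Here is a sketch using the Record class and constructor from the object initializer section, with made-up sample data:

List<Record> people = new List<Record>()
{
  new Record(1, "Marc", null),
  new Record(2, "Ian", 1),
  new Record(3, "Ada", 99),   // parent 99 does not exist
};

// Left outer join: every person appears once, with the parent's name or "(none)".
var withParents =
  from child in people
  join parent in people on child.ParentID equals (int?)parent.ID into parents
  from p in parents.DefaultIfEmpty()
  select new
  {
    Child = child.Name,
    Parent = p == null ? "(none)" : p.Name
  };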

Lambda Expressions

Read more.

Lambda expressions are a fundamental part of working with LINQ. You usually will not find LINQ without lambda expressions. A lambda expression  is an anonymous method (ah ha!) that “can contain expressions and statements, and can be used to create delegates or expression tree types…The left side of  the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block.” (taken from the website referenced above.)

In LINQ, I could write:

var idAndName = from r in records 
  where r.Name=="Marc"
  select new { r.ID, r.Name };

and I’d get the IDs and names of people named “Marc”. With a lambda expression and the extension methods provided for a generic List, I can write the equivalent:

var idAndName2 = records.Where(r => r.Name == "Marc").Select(r => new { r.ID, r.Name });

LINQ and lambda expressions can be combined. For example, here’s some code from an article I recently wrote:

var unassoc = from et in dataSet.Tables["EntityType"].AsEnumerable()
  where !(dataSet.Tables["RelationshipType"].AsEnumerable().Any(
     rt => 
       (rt.Field<int>("EntityATypeID") == assocToAllEntity.ID) && 
       (rt.Field<int>("EntityBTypeID") == et.Field<int>("ID"))))
  select new { Name = et.Field<string>("Name"), ID = et.Field<int>("ID") };

LINQ, lambda expressions, anonymous types, implicit types, collection initializers and object initializers all work together to more concisely express  the intent of the code. Previously, we would have to do this with nested for loops and lots of “if” statements.

Expression Trees

Read more.

Let’s revisit the MyVector example. With expression trees, we can compile type-specific code at runtime, which lets us work with generic numeric types in a performance-efficient manner (compare with “dynamic” in C# 4.0, discussed below).

public class MyVector<T>
{
  private static readonly Func<T, T, T> Add;

  // Create and cache adder delegate in the static constructor.
  // Will throw a TypeInitializationException if you can't add Ts or if T + T != T 
  static MyVector()
  {
    var firstOperand = Expression.Parameter(typeof(T), "x");
    var secondOperand = Expression.Parameter(typeof(T), "y");
    var body = Expression.Add(firstOperand, secondOperand);
    Add = Expression.Lambda<Func<T, T, T>>(body, firstOperand, secondOperand).Compile();
  }

  public T X { get; set; }
  public T Y { get; set; }

  public MyVector(T x, T y)
  {
    X = x;
    Y = y;
  }

  public MyVector<T> AddVector(MyVector<T> v)
  {
    return new MyVector<T>(Add(X, v.X), Add(Y, v.Y));
  }
}

The above example comes from a post on StackOverflow.
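
As a quick usage sketch (assuming the class above is in scope), the same AddVector now works for int and double vectors alike:

MyVector<int> a = new MyVector<int>(1, 2);
MyVector<int> b = new MyVector<int>(10, 11);
MyVector<int> c = a.AddVector(b);
Console.WriteLine(c.X + ", " + c.Y);      // 11, 13

MyVector<double> d = new MyVector<double>(1.5, 2.5).AddVector(new MyVector<double>(0.5, 0.5));
Console.WriteLine(d.X + ", " + d.Y);      // 2, 3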

C# 4.0 Features

Dynamic Binding

Read more.

Let’s revisit the MyVector implementation again. With the dynamic keyword, we can defer the add operation to run time, when the actual type is known.

public class MyVector<T>
{
  public T X { get; set; }
  public T Y { get; set; }

  public MyVector() {}

  public MyVector<T> AddVector(MyVector<T> v)
  {
    return new MyVector<T>()
    {
      X = (dynamic)X + v.X,
      Y = (dynamic)Y + v.Y,
    };
  }
}

Because the operation is resolved at run time (via reflection and the dynamic language runtime), this is much slower than statically compiled code. According to the MSDN page referenced in the link above: the dynamic type simplifies access to COM APIs such as the Office Automation APIs, to dynamic APIs such as IronPython libraries, and to the HTML Document Object Model (DOM).

Named and Optional Arguments

Read more.

As with the dynamic keyword, the primary purpose of this is to facilitate calls to COM. From the MSDN link referenced above:

Named arguments enable you to specify an argument for a particular parameter by associating the argument with the parameter’s name rather than with  the parameter’s position in the parameter list. Optional arguments enable you to omit arguments for some parameters. Both techniques can be used with methods,  indexers, constructors, and delegates.

When you use named and optional arguments, the arguments are evaluated in the order in which they appear in the argument list, not the parameter list.

Named and optional parameters, when used together, enable you to supply arguments for only a few parameters from a list of optional parameters.  This capability greatly facilitates calls to COM interfaces such as the Microsoft Office Automation APIs.

I have never used named arguments and I rarely need to use optional arguments, though I remember when I moved from C++ to C#, kicking and screaming  that optional arguments weren’t part of the C# language specification!

Example

We can use named and optional arguments to indicate specifically which arguments we are supplying to a method:

public class NamedAndOptionalArgs
{
  public void Foo()
  {
    Bar(a: 1, c: 5);
  }

  public void Bar(int a, int b=1, int c=2)
  {
    // do something.
  }
}

As this example illustrates, we can specify the value for a, use the default value for b, and specify a non-default value for c. While I find named  arguments to be of limited use in regular C# programming, optional arguments are definitely a nice thing to have.

Optional Arguments, The Old Way

Previously, we would have to write something like this:

public void OldWay()
{
  BarOld(1);
  BarOld(1, 2);
}

public void BarOld(int a)
{
  // 5 being the default value.
  BarOld(a, 5);
}

public void BarOld(int a, int b)
{
  // do something.
}

The syntax available in C# 4.0 is much cleaner.

Generic Covariance and Contravariance

What do these words even mean? From Wikipedia:

  • covariant: converting from wider to smaller (like double to float)
  • contravariant: converting from narrower to wider (like float to double)

First, let’s look at covariance and contravariance with delegates, which have been around since Visual Studio 2005.

Delegates

Read more.

Not wanting to restate the excellent “read more” example referenced above, I will simply state that covariance allows us to assign a method returning a sub-class type to a delegate defined as returning a base-class type. In terms of derivation, this goes from something wider (the base class) to something narrower (the derived class).

Contravariance, with regard to delegates, lets us write a method whose parameter is the base class and attach it where the delegate expects a sub-class parameter (going from narrower to wider). For example, I remember being annoyed, before this was possible, that I could not consume an event having a MouseEventArgs argument with a generic event handler taking an EventArgs argument. This form of contravariance has been around since VS2005, but it makes for a useful example of the concept; see the sketch below.
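
To put both statements into code, here is a small sketch (Animal, Dog, and the handler names are invented; the event wiring assumes a Windows Forms Control from System.Windows.Forms):

public class Animal { }
public class Dog : Animal { }

public class DelegateVariance
{
  public delegate Animal AnimalFactory();

  public static Dog CreateDog() { return new Dog(); }

  // A handler written against the base EventArgs type.
  public static void GenericHandler(object sender, EventArgs e)
  {
    Console.WriteLine("Handled.");
  }

  public void Demo(Control control)
  {
    // Covariance: a method returning Dog can back a delegate declared to return Animal.
    AnimalFactory factory = CreateDog;
    Animal animal = factory();

    // Contravariance: the EventArgs handler subscribes to an event whose delegate
    // (MouseEventHandler) passes the more derived MouseEventArgs.
    control.MouseDown += GenericHandler;
  }
}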

Generics

Read more.

Also this excellent technical blog on Code Project.

Again, the MSDN page referenced is an excellent read (in my opinion) on co- and contravariance with generics. To briefly summarize: as with delegates, covariance applies to output (return) positions, letting us declare a “wide” (more general) return type but actually supply a “smaller” (more specialized) one. So, for example, the generic interfaces for enumeration support covariance.

Conversely, contravariance lets us go from something narrower (more specialized, a derived class) to something wider (more general, a base class), and applies to input (parameter) positions in generic interfaces such as IComparer; see the sketch below.
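
Concretely, a small sketch (ByTextComparer is invented for the example; the usual System and System.Collections.Generic namespaces are assumed):

// A comparer written for the widest type, object.
public class ByTextComparer : IComparer<object>
{
  public int Compare(object x, object y)
  {
    return string.Compare(x.ToString(), y.ToString());
  }
}

public class VarianceDemo
{
  public void Demo()
  {
    // Covariance (IEnumerable<out T>): a sequence of the more specialized type
    // can be assigned to a sequence of the more general type.
    IEnumerable<string> names = new List<string>() { "Marc", "Ian" };
    IEnumerable<object> objects = names;

    // Contravariance (IComparer<in T>): a comparer of the more general type
    // can be used where a comparer of the more specialized type is expected.
    IComparer<string> stringComparer = new ByTextComparer();
    Console.WriteLine(stringComparer.Compare("Marc", "Ian"));
  }
}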

But How Do I Define My Own?

To specify a covariant return type, we use the “out” keyword on the generic type parameter. To specify a contravariant method parameter, we use the “in” keyword on the generic type parameter. For example (read more here):

public delegate T2 MyFunc<in T1,out T2>(T1 t1);

T2 is the covariant return type and T1 is the contravariant method parameter.
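
As a usage sketch (assuming the MyFunc declaration above is in scope), variance lets the more general delegate instance stand in for the more specific one:

// Accepts the wide type (object) and returns the narrow type (string).
MyFunc<object, string> describe = item => "Item: " + item;

// Because T1 is contravariant (in) and T2 is covariant (out), this assignment is allowed:
// the string input narrows, the object output widens.
MyFunc<string, object> f = describe;
Console.WriteLine(f("hello"));   // Item: hello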

A further example is here.

Conclusion

In writing this, I was surprised by how much I learned; it deepened my understanding of C# and gave me a broader picture of the arc of the language’s evolution. This was a really useful exercise!

History

Updated the article based on comments received.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

from:http://www.codeproject.com/Articles/327916/C-Language-Features-From-C-2-0-to-4-0#WithoutGenerics3