Wednesday, 14 July 2010

C# 4 broke my code! The Pitfalls of COM Interop and Extension methods

A couple of weeks back I got the go-ahead from my boss to upgrade to Visual Studio 2010. What with Microsoft’s legendary commitment to backwards compatibility, and the extended beta period for the 2010 release, I wasn’t expecting any problems. The Upgrade Wizard worked its magic without hiccup, the code compiled, everything seemed fine.

Then I ran our suite of unit tests. 6145 passed. 15 failed.

When I perused the logs of the failing tests they all had this exception in common:

COMException: Type mismatch. (Exception from HRESULT: 0x80020005 (DISP_E_TYPEMISMATCH))

How could that happen? Nothing should have changed: though I was compiling in VS 2010, I was still targeting .Net 3.5. The runtime behaviour should be identical. Yet it looked like COM Interop itself was broken. Surely not?

A footprint in the flowerbed

I dug deeper. In every case, I was calling an extension method on a COM object – an extension method provided by Microsoft’s VSTO Power Tools, no less.

These Power Tools are painkillers that Microsoft made available for courageous souls programming against the Office Object Model using C#. Up until C# 4, calling a method with optional parameters required copious use of Type.Missing – one use for every optional parameter you didn’t wish to specify. Here’s an example (you might want to shield your eyes):

var workbook = application.Workbooks.Add(Missing.Value);
workbook.SaveAs(
    "Test.xls", 
    Missing.Value, 
    Missing.Value, 
    Missing.Value, 
    Missing.Value, 
    Missing.Value, 
    XlSaveAsAccessMode.xlNoChange, 
    Missing.Value, 
    Missing.Value, 
    Missing.Value, 
    Missing.Value, 
    Missing.Value);

The Power Tools library provides extension methods that hide the optional parameters:

using Microsoft.Office.Interop.Excel.Extensions;

var workbook = application.Workbooks.Add();
workbook.SaveAs(
    new WorkbookSaveAsArgs { FileName = "Test.xls" });

Can you see what the problem is yet? I couldn’t. So I thought I’d try a work-around.

A big part of C# 4 is playing catch-up with VB.Net by improving support for COM Interop. The headline feature is that fancy new dynamic keyword that makes it possible to do late-bound COM calls, but the one that’s relevant here is the support for optional and named parameters. The nice thing is that, whereas using the dynamic keyword requires you to target .Net 4.0, optional and named parameters are a compiler feature, so you can make use of them even if you are targeting .Net 3.5.

That meant that I didn’t need Power Tools any more. In C# 4 I can just rewrite my code like this:

var workbook = application.Workbooks.Add();
workbook.SaveAs( Filename: "Test.xls" );

And that worked. COM Interop clearly wasn’t broken. But why was the code involving extension methods failing?

Looking again over the stack traces of the exceptions in my failed tests I noticed something very interesting. The extension methods didn’t appear. The call was going directly from my method into the COM Interop layer. I was running a Debug build, so I knew that the extension method wasn’t being optimised out.

The big reveal

Then light dawned.

Take a look at the signature of the SaveAs method as revealed by Reflector:

void SaveAs(
   [In, Optional, MarshalAs(UnmanagedType.Struct)] object Filename,
    /* 11 other parameters snipped */)

Notice in particular how Filename is typed as object rather than string. This always happens to optional parameters in COM Interop to allow the Type.Missing value to be passed through if you opt out of supplying a real value.

Now when C# 3.0 was compiling my code and deciding what I meant by

workbook.SaveAs(new WorkbookSaveAsArgs { ... } );

it had to choose between a call to the SaveAs instance method with 12 parameters or a call to the SaveAs extension method with a single parameter. The extension method wins hands down.

But the landscape changes in C# 4.0. Now the compiler knows about optional parameters. It’s as if a whole bunch of overloads have been added to the SaveAs method on Workbook with 0 through to 12 parameters. Which method does the compiler pick this time? Remember that it will always go for an instance method over an extension method. Since new WorkbookSaveAsArgs { … } is a perfectly good object the compiler chooses the SaveAs instance method, completely ignoring the extension method.
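You can see the rule in isolation with a minimal sketch of my own (hypothetical names, not the Office API; in the real case the optional parameters live in the interop assembly’s metadata rather than C# syntax, but overload resolution behaves the same way):

using System;

class SaveArgs { }

class Document
{
    // Mirrors the COM-style signature: optional, object-typed parameters.
    public void Save(object filename = null, object format = null)
    {
        Console.WriteLine("Instance method called");
    }
}

static class DocumentExtensions
{
    public static void Save(this Document document, SaveArgs args)
    {
        Console.WriteLine("Extension method called");
    }
}

class Program
{
    static void Main()
    {
        // Under C# 3 rules the extension method is the only candidate;
        // under C# 4 the instance method matches (the compiler fills in
        // the optional parameter) and instance methods always win.
        new Document().Save(new SaveArgs());
    }
}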

All seems well until we hit Run and Excel is handed the WorkbookSaveAsArgs instance by COM Interop. Not knowing how to handle it, it spits it back out in the form of a Type Mismatch exception.

Mystery solved.

The moral

So watch out if your code uses extension methods on COM objects and you’re planning on upgrading to VS 2010. Don’t assume that your code works the same as before just because it compiled OK. Unit tests rule!

Update: Microsoft have confirmed that this is indeed a breaking change that they didn’t spot before shipping.

Monday, 14 June 2010

Project Euler 59: Cracking an XOR code

Code breaking is about as glamorous as it gets for a Geek. So when it comes to Project Euler Problem 59, think 24, with Jack Bauer unscathed amidst a hail of bullets, memory stick in one hand, terrorist’s neck throttled in the other, threatening terrible retribution to the bad guy’s family if the secret goes to the grave with him. Terrorist rasps out,

“There’s a file … cipher1.txt … ASCII format … urrrh … XOR … 3 … letter … password … lower case … all English”

What would Chloe O’Brian make of that when she received the file over a secure socket from Jack’s Smart Phone? Being the bright girl she is, I’ve no doubt that she’d fire up MonoDevelop (only the dumb FBI would use Visual Studio and Windoze) and write the following:

var encryptedBytes = File.ReadAllText(dataPath)
                        .Split(',')
                        .Select(byte.Parse)
                        .ToArray();

That would get her the encrypted data from the file as a sequence of bytes. Now to generate all possible passwords:

const byte FirstChar = (byte)'a';
const byte LastChar = (byte)'z';

var allPossiblePasswords = from a in FirstChar.To(LastChar)
                           from b in FirstChar.To(LastChar)
                           from c in FirstChar.To(LastChar)
                           select new[] { a, b, c };
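The query leans on a To extension method to generate the range of candidate bytes. A minimal sketch, assuming the range is inclusive at both ends:

public static IEnumerable<byte> To(this byte first, byte last)
{
    // Iterate as int so that last == byte.MaxValue can't overflow the loop.
    for (int current = first; current <= last; current++)
    {
        yield return (byte)current;
    }
}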

XOR encryption requires that the password be repeated cyclically, abcabcabc.... So given a sequence containing the characters of password, how about a Repeat extension method that repeats the sequence ad infinitum? (if you look at the following code and hear the Infinite Loop alarms go off, remember that iterators are lazily evaluated):

public static IEnumerable<T> Repeat<T>(this IEnumerable<T> sequence)
{
    while (true)
    {
        foreach (var item in sequence)
        {
            yield return item;
        }
    }
}

While she’s at it, she’ll need to Xor two sequences together:

public static IEnumerable<byte> Xor(this IEnumerable<byte> sequenceA, IEnumerable<byte> sequenceB)
{
    return sequenceA.Zip(sequenceB, (left, right) => (byte)(left ^ right));
}

This uses the Zip method, new in .Net 4.0, which merges two sequences together, allowing elements from left and right sequences to be processed in pairs (if you’re surprised, as I was, that the (byte) cast is necessary, see this question on Stack Overflow).
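In miniature, the reason for the cast: C# defines ^ only for int and wider types, so the byte operands are promoted and the result has to be narrowed back down:

byte left = 0x5A;
byte right = 0x0F;
int promoted = left ^ right;           // fine: bytes promote to int
byte narrowed = (byte)(left ^ right);  // explicit cast needed to get a byte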

Now she can try out all the passwords, seeing what they give her if she uses them to decrypt the text. Note that Chloe has already adopted .Net 4.0 and the PLINQ extensions, so she’s using AsParallel() to max out her 128 Core workstation. (ConvertAsciiBytesToString is another extension method that simply delegates to ASCIIEncoding.GetString).

var possibleDecryptions = from trialPassword in allPossiblePasswords.AsParallel()
                          let repeatedPassword = trialPassword.Repeat()
                          select encryptedBytes
                                          .Xor(repeatedPassword)
                                          .ConvertAsciiBytesToString();
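A plausible sketch of the ConvertAsciiBytesToString helper, assuming it simply delegates to the ASCII encoding as mentioned above:

public static string ConvertAsciiBytesToString(this IEnumerable<byte> bytes)
{
    // Materialise the lazy sequence and decode it as ASCII text.
    return Encoding.ASCII.GetString(bytes.ToArray());
}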

But she’s hardly going to spend the next 24 hours trawling through the possible decryptions to pick out the one in recognisable English from the 17575 in gobbledygook. So she needs some means of assessing them automatically. This is where Chloe is forever in my debt. She read last week’s episode of Functional Fun, so she knows about my LanguageAnalyser that uses frequency analysis to determine how likely a sample of text is to be written in a particular language.

Rather than using the analyser to determine which of several languages the sample is written in, Chloe can use her prior knowledge that the coded message is written in English, and determine which of the possible decryptions has letter frequencies that most closely match those in English.

var languageAnalyser = new LanguageAnalyser();

var decryptionsWithDistanceToEnglish = from decryptedString in possibleDecryptions
                                       select new
                                                  {
                                                      DecryptedString = decryptedString,
                                                      Distance = languageAnalyser.DetermineCloseness(decryptedString, LanguageProfiles.English)
                                                  };

var mostLikelyDecryption = decryptionsWithDistanceToEnglish.MinItem(p => p.Distance);
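MinItem isn’t a standard LINQ operator; a minimal sketch, assuming it returns the element with the smallest key (where Min would return the key itself):

public static T MinItem<T, TKey>(this IEnumerable<T> sequence, Func<T, TKey> keySelector)
    where TKey : IComparable<TKey>
{
    // Keep whichever of each pair of elements has the smaller key.
    return sequence.Aggregate((best, next) =>
        keySelector(next).CompareTo(keySelector(best)) < 0 ? next : best);
}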

And that's it. Thanks to Linguistic Geometry and LINQ, Chloe can report the decrypted (and surprisingly biblical!) message to Jack, so that he can get on with saving the world.

Ahem. Before we got carried away we were trying to solve Project Euler Problem 59, and that asks for the sum of the Ascii values of the decrypted message. So we need one more step:

var sumOfAsciiValues = mostLikelyDecryption.DecryptedString
                        .ConvertStringToAsciiBytes()
                        .Aggregate(0, (runningTotal, next) => runningTotal + next);

The full code is shown below, and is also available on MSDN Code Gallery.

namespace ProjectEuler
{
    [EulerProblem(59, Title="Using a brute force attack, can you decrypt the cipher using XOR encryption?")]
    public class Problem59
    {
        public long Solve()
        {
            const byte FirstChar = 97;
            const byte LastChar = 122;

            var dataPath = @"ProblemData\Problem59.txt";

            var encryptedBytes = File.ReadAllText(dataPath)
                                    .Split(',')
                                    .Select(byte.Parse)
                                    .ToArray();

            var allPossiblePasswords = from a in FirstChar.To(LastChar)
                                       from b in FirstChar.To(LastChar)
                                       from c in FirstChar.To(LastChar)
                                       select new[] { a, b, c };

            var possibleDecryptions = from trialPassword in allPossiblePasswords.AsParallel()
                                      let repeatedPassword = trialPassword.Repeat()
                                      select encryptedBytes
                                          .Xor(repeatedPassword)
                                          .ConvertAsciiBytesToString();

            var languageAnalyser = new LanguageAnalyser();

            var decryptionsWithDistanceToEnglish = from decryptedString in possibleDecryptions
                                                   select new
                                                              {
                                                                  DecryptedString = decryptedString,
                                                                  Distance = languageAnalyser.DetermineCloseness(decryptedString, LanguageProfiles.English)
                                                              };

            var mostLikelyDecryption = decryptionsWithDistanceToEnglish.MinItem(p => p.Distance);

            var sumOfAsciiValues = mostLikelyDecryption.DecryptedString
                .ConvertStringToAsciiBytes()
                .Aggregate(0, (runningTotal, next) => runningTotal + next);

            return sumOfAsciiValues;
        }
    }
}

Saturday, 23 January 2010

Simulating Snakes and Ladders with a PLINQ Turbo boost

“Daddy, will you play a game with me?”

I’d been left in charge, whilst my wife had some well-earned time to herself. Baby brother had been bounced to sleep, and now big sister was wanting some attention.

“Sure! What shall we play?”

My daughter fished around in the cupboard, then thumped a box on the table. My heart sank. “Snakes and Ladders”, she announced.

Snakes and Ladders

For the benefit of any reader who has the good fortune never to have thus occupied their children, Snakes and Ladders is a game played on a 10 x 10 board wherein players take it in turns to move across the board boustrophedonically (that is, left to right then right to left – I’ve been itching to work that word into a blog post), on each go moving the number of spaces indicated by the throw of a die. The winner is the first person to reach the last space on the board. The only modicum of excitement to be found in the game is provided by the eponymous snakes and ladders, which connect certain squares on the board. Land on a square at the foot of a ladder, and you zoom ahead to the square at its top. But beware the squares where snakes lie in wait: land here, and you must slide back down the board to the snake’s tail.

You see then, why I wasn’t thrilled at my daughter’s choice: I don’t expect much from a children’s game, but it is nice to be sure that it will finish before one’s brain goes into power-save mode. And no mathematician would be prepared to give such a guarantee about this game: it is perfectly possible to find yourself within a throw of winning, but the next moment unceremoniously snaking your way back to the beginning, to start all over again.

So as I fixed my grin in place and started rolling the die, I got to wondering: how long till we can play something else? How many turns does a game of Snakes and Ladders last, on average? By the time I’d slid down my first snake, I knew how I could find out: I would set my computer playing snakes and ladders, over and over, and instruct it to count how many turns each game took. Apply a little high-school Maths and Stats to the results, and I’d have my answer.

I will confess the game board received little of my attention from then on, as code began to assemble in the editor in my mind (which has syntax highlighting, but is lacking Intellisense - CommonSense too, my wife would tell you). When I realised that I could use PLINQ to parallelise the code and speed up the simulation, my fingers began itching to get at the keyboard.

Here’s how the code turned out, once I fired up Visual Studio.

Modelling the Game

First, a fluent interface for defining the game board:

var board = new BoardBuilder()
            .Length(100)
            .Ladder().From(1).To(38)
            .Ladder().From(4).To(14)
            .Ladder().From(9).To(31)
            .Snake().From(16).To(6)
            .Ladder().From(21).To(42)
            .Ladder().From(28).To(84)
            .Ladder().From(36).To(44)
            .Snake().From(47).To(26)
            .Snake().From(49).To(11)
            //...
            .Build();

Then the game logic. The GameBoard built by the BoardBuilder consists of a number of Places. Each Place has a reference to the Place following it on the board, and if it is at the foot of a ladder or the head of a snake then it has a reference to the Place at the other end of the link. Most of the logic of the game simulation is contained in the MoveOn method. This method is defined recursively. It is passed the numberOfPlaces that the player should move, and it returns a reference to the Place they end up at, taking into account any snakes or ladders leaving the final square. For simplicity, I’ve decided that players cannot move beyond the last space on the board – they simply get stuck there, whatever they throw.

class Place
{
    private Place _ladderTo;
    private Place _chuteTo;
    private Place _nextPlace;

    // ...

    public Place MoveOn(int numberOfPlaces)
    {
        Contract.Requires(numberOfPlaces >= 0, "numberOfPlaces must be non-negative");

        if (numberOfPlaces > 0)
        {
            if (IsLastPlace)
            {
                return this;
            }
            else
            {
                return _nextPlace.MoveOn(numberOfPlaces - 1);
            }
        }

        if (_ladderTo != null)
        {
            return _ladderTo;
        }
        else if (_chuteTo != null)
        {
            return _chuteTo;
        }
        else
        {
            return this;
        }
    }

    // ...
}

Running the Simulation

At last, the simulation:

var gameLengths
    = 1.To(numberOfTrials)
    .Select(trial =>
        {
            var die = new Die();
            return board.FirstPlace
                .Unfold(place => place.MoveOn(die.Roll()))
                .TakeWhile(place => !place.IsLastPlace)
                .Count();
        })
    .ToList();

Not much to look at, is it? I start by generating a simple sequence, 1.To(numberOfTrials), and for each item in that sequence I instruct the computer to play a LINQified game of Snakes and ladders, and to count how many moves it takes to get to the end of the board. Each game starts by creating a new Die (would be handy if we could do that in real life – we’re always losing ours). Then, using the Unfold method, seeded with the first place on the board, we generate a sequence of all the spaces landed on during that game. We pull items out of this sequence until we hit the last place on the board, at which point we count how many moves the game took.
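Unfold is another of my helper extension methods; a minimal sketch, assuming it lazily yields the successive results of applying the generator to the previous value:

public static IEnumerable<T> Unfold<T>(this T seed, Func<T, T> generator)
{
    // An infinite, lazily-evaluated sequence: each element is produced
    // by applying the generator to the one before it.
    var current = seed;
    while (true)
    {
        current = generator(current);
        yield return current;
    }
}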

Analysing the Results

Of course, we want to do some analysis on the results. LINQ makes it trivial to find the average length of a game:

var averageLength = gameLengths.Average();

This turns out to be 35.8 moves.

More interesting is the frequency distribution of game lengths which tells us how often we can expect to play games of a given length.

var frequencyDistribution =
    gameLengths
        .GroupBy(gameLength => gameLength)
        .OrderBy(gameLengthGroup => gameLengthGroup.Key)
        .Select(gameLengthGroup => gameLengthGroup.Key + "," + gameLengthGroup.Count() + "," + gameLengthGroup.Count() / (double)numberOfTrials)
        .ToList();

System.IO.File.WriteAllLines("SnakesAndLadderResult.csv", frequencyDistribution);

gameLengths is the sequence containing the length of each simulated game. This query is counting how many games of each length were encountered in the simulation by grouping together all games of the same length. After ordering the groups, shortest game length first, it creates a string for each group, listing the length, the absolute number of games of that length and the frequency of that length of game. File.WriteAllLines is a neat method that (from .Net 4.0 onwards) takes an IEnumerable<string> and writes each string as a new line to a file, giving us an instant CSV file from which Excel can draw a pretty chart of game-length frequencies.

As you can see at a glance, the shortest game I can hope for takes 7 moves, but that’s pretty unlikely to happen – about once in a thousand games. The most common length of game encountered in the simulation was 19. Now let’s transform this into a cumulative frequency chart (which shows how often games take a given number of moves or fewer). This shows that in at least half of all games I’ll need to throw the dice 28 times or more before getting to the end. Worse, there’s a 10% chance that it’ll be 68 moves or more before I beat those slippery snakes.

The PLINQ Part

Where does PLINQ come into all this? And what is PLINQ anyway? PLINQ is an extension to LINQ, part of the Parallel Tasks library that is shipping as part of .Net 4.0. You know how you splashed out for that quad-core, hyper-threading enabled processor and geeked out over all those little processor utilisation charts in Task Manager – then noticed that your CPU rarely hits more than 12.5% total utilisation? Well PLINQ helps you put those lazy cores to work. Watch closely:

var gameLengths
    = 1.To(numberOfTrials)
    .AsParallel()
    .Select(trial =>
        {
            var die = new Die();
            return board.FirstPlace
                .Unfold(place => place.MoveOn(die.Roll()))
                .TakeWhile(place => !place.IsLastPlace)
                .Count();
        })
    .ToList();

Spot it? Check out line 3. AsParallel(), that’s all you need to add. It takes a standard Linq-to-Objects query, and splits the work across multiple threads. On my bog-standard dual-core home laptop, that change took the time to run 1 million iterations down from 58 seconds to 36. That’s a 1.6x speed-up – by changing one line of code! And the best thing is, the speed-up increases as the number of cores in your machine goes up. On my Core i7 Quad core box at work, the simulation took 7 seconds in its parallelised form – and 9 seconds without the AsParallel magic! Still a 1.3x speed up – but overshadowed by the sheer awesomeness of the Nehalem architecture!

I should point out one subtlety here. You might have been wondering why it was necessary to create a new Die for every game? Why not share a die between games, like you do in the real world? The reason is the multiple threads introduced by the AsParallel() call. The Roll() method on Die calls through to Random.Next() under the covers, and the Next method isn’t thread-safe.

The easiest fix then, is to give each game its own Die. But there’s another thing: with all those threads beavering away simultaneously there’s a good chance that several Die objects will be created at the same time, each initialising its own instance of Random. Since Random by default seeds itself from the system clock, this would result in identical games being played – not exactly representative of the real world.

So what I do is have a static instance of Random which I use to seed all the others, taking care to take a lock on it of course:

class Die
{
    private Random _random;
    private static Random _seedGenerator = new Random();

    public Die()
    {
        lock (_seedGenerator)
        {
            _random = new Random(_seedGenerator.Next());
        }
    }

    public int Roll()
    {
        return _random.Next(1, 7);
    }
}

Try it for yourself

As always the complete source code is available for download from MSDN Code Gallery. Try it out, and let me know what you think.

Thursday, 19 November 2009

Future Directions for C# and Visual Basic

I’m having great fun watching the Microsoft PDC 2009 session videos, and blogging the highlights for future reference. In case you want to jump into the video, I’ve included some time-checks in brackets (0:00). Read the rest of the series here.

Luca Bolognese opened his session Future Directions for C# and Visual Basic by announcing that the strategy for future development of C# and Visual Basic is one of co-evolution. As he said, the previous strategy where each language was independent, and one would get new features that the other didn’t was “maximising unsatisfaction”. Now there is one compiler team responsible for both languages, and any big new features added to one will get added to the other. VS 2010 already brings the two languages much closer together, and this will increase in the future.

In the first half of the presentation, Luca talked about the three big trends in current languages: Declarative, Dynamic and Concurrent. In a demo (starting at 6:20 in the video) Luca created a CSV file parser. He showed (12:08) how writing the code in declarative style (using LINQ) not only makes it easier to read, it also makes it easier to parallelize. As simple as adding AsParallel() to the middle of the query, in fact (15:18). The Task Parallel Library (part of .Net 4) makes it possible to parallelize imperative code (for loops, etc.) but with much greater potential for bugs like unsynchronized collection access (16:20). Luca then went on to demonstrate the dynamic language features in C# 4, and the DynamicObject base class (24:38).

Then he turned to the future, but not without a disclaimer that there were no promises of anything he talked about actually shipping. It seems, however, that Microsoft are pretty firmly committed to the first item he mentioned: rewriting of the C# compiler in C# and the VB.Net compiler in VB.Net, and opening up the black box so that we can make use of the lexer, parser, code generator, etc. From what he said later on, I gather that most of the team are currently involved in completing this work.

Luca demonstrated (36:00) how in just 100 lines of code he could use the new APIs to create a refactoring that would re-order the parameters of a method (and take care of the call site). Previously, anyone wanting to do something like this would have first needed to write a lexer and a parser, but that would be provided for us.

The last demonstration was of something Luca called “resumable methods”. These appear to be something akin to async workflows in F#. By prefixing an expression with the “yield” statement, Luca indicated to the compiler that he wanted the method to be called asynchronously (50:30). The compiler then takes care of rewriting the code so that execution resumes at the next statement once the asynchronous execution of the first statement completes. The benefit of this is that the thread can be used for something else meanwhile. By getting the compiler to do the rewriting we can avoid a whole lot of ugly code (see the video at 46:24).

One other thing Luca mentioned as being considered by the team is support for immutability (52:41). He said that they had considered 4 or 5 different designs but hadn’t yet settled on one that was exactly right. Part of the problem is that so much of the language is affected: not just the types themselves, but parameters, fields, etc.

If you want more on this, read Sasha Goldstein’s account of Luca’s talk.

Thursday, 12 November 2009

PDC 2009 @ Home

Can you believe it? A year flown by already since I jetted off to LA, reclined to the soothing sound of Scott Gu's Keynote, and all-round indulged in a general geek-out at Microsoft PDC 2008! And already Microsoft PDC 2009 is upon us.

Windows 7, which we gawped at for the first time twelve months ago, is now gracing our desktops; Windows Azure is almost ready to go live; and after a good 52 weeks of head-scratching developers are just beginning to work out what Oslo is and isn't good for.

What with our baby being born and Bankers busting our economy, I'm not going to be present in person at the PDC this year. But the Directors at Paragon have very kindly granted us all a couple of days to be present in spirit by taking advantage of the session videos that Microsoft generously post online within 24 hours of the event.

So which sessions will I be watching?

Top of my list are the PDC specialties, the sessions usually flagged up with the magic word "Future" in their title. What developer doesn't go to PDC with an ear greedy for announcements of shiny new features for their favourite platform?

The other thing PDC is famous for are “deep-dives”: talks by the architects and developers of the various technologies on show, laying bare the inner workings of their creations. This year I’ll probably focus on WPF and Silverlight.

As well as watching these sessions, I hope to find some time over the next week for blogging about my discoveries. So why don’t you follow along?

Thursday, 27 November 2008

Conquering VB envy with C# 4.0

Whilst your average C# programmer probably considers himself superior to those poor people who habitually use VB.Net, he is at the same time silently envious of one or two (but certainly no more than that!) features of the deluded ones' language. Concise, hassle-free, and above all, Type.Missingless COM Interop for one. And deep XML integration for two. But with C#4.0, and Anders' masterly introduction of objects that can be statically typed as dynamic, that will change. Gone will be all envy; in its place, pure pity. ;-) Here's one example for why.

I finally found time (we've just finished our iteration), and an excuse (I've got to give a briefing on PDC), to delve into my smart black USB hard disk ("the goods") that I brought back from the PDC, and to fire up the Virtual Hard Drive where lives Visual Studio 2010 and C# 4.0. As an aside, I'm running it under VirtualBox, rather than Virtual PC, without any difficulties at all, and with very good performance.

In Anders' presentation about the Future of C# he showed a simple example of how the dynamic features of C# 4.0 could be used to implement a property bag, where you could write (and read) arbitrary properties on the bag and it would store whatever you assigned to it. The punch line was that he implemented it in a couple of lines of code.

That gave me the idea that it shouldn't be too difficult to simulate something like VB.Net XML integration, where you can take an XML object, and access its attributes and elements as if they were properties of the object. And it wasn't. Now don't get too excited: I spent about fifteen minutes on this, but the outcome should be enough to whet your appetite.

First, the end-result:

static void Main(string[] args)
{
    var xml = "<Pet Type='Cat' Name='Leo'><Owner Name='Sam' Age='27'/></Pet>";
    dynamic dynamicXml = new DynamicXElement(xml);

    Console.WriteLine("Name={0}, Type={1}", dynamicXml.Name, dynamicXml.Type);
    Console.WriteLine("Owner={0}, Age={1}", dynamicXml.Owner.Name, dynamicXml.Owner.Age);
    Console.ReadLine();
}

In line 4 I'm creating one of these new-fangled dynamic object thingies, declaring it with the magical dynamic keyword. The dynamic keyword tells the compiler to do all member resolution on the object at run time, rather than compile time. And how does it do the member resolution? That's where the DynamicXElement comes in. The DynamicXElement is equipped with the right knobs and levers so that it can participate in dynamic resolution. It must be tremendously complicated then? You'll see in a minute.

Lines 6 and 7 show off the amazing capabilities of this DynamicXElement. Having supplied it with some XML when we initialised it, we can now access values of attributes just as if they were properties of the element. Likewise with child elements and their properties. Isn't that exciting? Doesn't it deserve some applause (as the more desperate of the PDC presenters might say!)?

So how does dynamic binding work? I won't pretend to you that I understand it fully at the moment. What I do know is that if you want to have a say in how member resolution works, your objects have to implement IDynamicObject. I also know that in .Net 4.0 there will be a base implementation of that interface called DynamicObject (follow the link to get the code for it); if you inherit from that it takes care of the minor complications for you, leaving your code to be as simple as:

using System.Xml.Linq;

class DynamicXElement : System.Dynamic.DynamicObject
{
    XElement _xml;

    public DynamicXElement(string xml)
    {
        _xml = XElement.Parse(xml);
    }

    public DynamicXElement(XElement element)
    {
        _xml = element;
    }

    public override object GetMember(System.Scripting.Actions.GetMemberAction info)
    {
        var attribute = _xml.Attribute(info.Name);

        if (attribute != null)
        {
            return attribute.Value;
        }

        return new DynamicXElement(_xml.Element(info.Name));
    }
}

As you can see, I only need to override one method, GetMember, to give me the control I need of dynamic behaviour (there are other overrides available if you want to handle calls to property setters or methods). This method is called whenever the runtime wants to resolve a call on a property getter. The GetMemberAction object that is passed to the method contains a Name property giving, unsurprisingly, the name of the property that's being asked for. In my ultra-simple implementation, I'm using that to first check for the existence of an attribute with that name; or, failing that, assuming that it must be the name of a child element. I'm returning the child element in a DynamicXElement wrapper so its attributes and children can be accessed in the same dynamic way.
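Update: if you're reading this with the final .Net 4.0 bits rather than the PDC CTP, the API changed before shipping: DynamicObject lives in System.Dynamic, and the override to supply is TryGetMember. Here's a sketch of the same wrapper against the shipped API, assuming the behaviour carries over unchanged:

using System.Dynamic;
using System.Xml.Linq;

class DynamicXElement : DynamicObject
{
    private readonly XElement _xml;

    public DynamicXElement(string xml) { _xml = XElement.Parse(xml); }
    public DynamicXElement(XElement element) { _xml = element; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        // Attributes take precedence, exactly as in the CTP version above.
        var attribute = _xml.Attribute(binder.Name);
        if (attribute != null)
        {
            result = attribute.Value;
            return true;
        }

        // Otherwise assume a child element, wrapped for further dynamic access.
        result = new DynamicXElement(_xml.Element(binder.Name));
        return true;
    }
}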

Simple wasn't it?

But please look gently at this code. I know that it will fall to pieces in anything approaching a real-world situation (like when an element has an attribute and a child element with the same name). If you want to see some more substantial examples, look at Nikhil Kothari's blog. Here's his dynamic JSON wrapper, for instance.