Custom IKinectController – Drag and Drop

Drag and Drop sample with IKinectManipulatableController

This sample shows how to implement a custom IKinectControl, for example to move things around in a Canvas.

To hook into the KinectRegion magic you can implement your own UserControl that implements IKinectControl.
As you move your hand, the KinectRegion tracks the movement and constantly checks whether there is a Kinect-enabled control at the current hand pointer’s position.
To find such a control, the KinectRegion looks for IKinectControls.

The IKinectControl interface forces you to implement the method IKinectController CreateController(IInputModel inputModel, KinectRegion kinectRegion).
An IKinectControl is also required to implement the properties IsManipulatable and IsPressable.
This way you specify which gestures your control reacts to.
Because this sample moves controls around, the Draggable control’s IsManipulatable property returns true.

UserControl Draggable.cs implements IKinectControl

public sealed partial class Draggable : UserControl, IKinectControl
{
    public Draggable()
    {
        this.InitializeComponent();
    }

    public IKinectController CreateController(IInputModel inputModel, KinectRegion kinectRegion)
    {
        // Only one controller is instantiated for one Control
        var model = new ManipulatableModel(inputModel.GestureRecognizer.GestureSettings, this);
        return new DragAndDropController(this, model, kinectRegion);
    }

    public bool IsManipulatable { get { return true; } }

    public bool IsPressable { get { return false; } }
}

You’ll find two interfaces in the SDK’s *.Controls namespace that derive from IKinectController, and they correspond directly to the two properties you have to implement.

Those interfaces are:
– IKinectPressableController
– IKinectManipulatableController

Since IsManipulatable returns true, CreateController should return an instance of IKinectManipulatableController.
For an example how to implement IKinectPressableController see here.
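By analogy with the Draggable control shown above, a pressable control would look roughly like this. This is only a sketch, not verified against the SDK: the PressableModel type and the PushButtonController class are assumptions based on the symmetry of the API.

```csharp
// Hypothetical sketch by analogy with the Draggable control:
// a control that returns IsPressable = true and hands back a
// pressable controller. PressableModel and PushButtonController
// are assumed names; verify against the actual SDK before use.
public sealed partial class Pushable : UserControl, IKinectControl
{
    public Pushable()
    {
        this.InitializeComponent();
    }

    public IKinectController CreateController(IInputModel inputModel, KinectRegion kinectRegion)
    {
        var model = new PressableModel(inputModel.GestureRecognizer.GestureSettings, this);
        return new PushButtonController(this, model, kinectRegion);
    }

    public bool IsManipulatable { get { return false; } }

    public bool IsPressable { get { return true; } }
}
```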

Controller class DragAndDropController

public class DragAndDropController : IKinectManipulatableController, IDisposable
{
}

The DragAndDropController‘s constructor gets a reference to the manipulatable control, a ManipulatableModel, and a reference to the KinectRegion.

public DragAndDropController(FrameworkElement element, ManipulatableModel model, KinectRegion kinectRegion)
{
    this.element = new WeakReference(element);
    this.kinectRegion = kinectRegion;
    this.inputModel = model;

    if (this.inputModel == null)
        return;
...

The ManipulatableModel provides four events you can subscribe to in order to react to user input.
This sample uses the NuGet package Kinect.ReactiveV2.Input, which provides Rx extension methods for subscribing to these events.

...
    this.eventSubscriptions = new CompositeDisposable 
    {
        this.inputModel.ManipulationStartedObservable()
                       .Subscribe(_ => VisualStateManager.GoToState(this.Control, "Focused", true)),

        this.inputModel.ManipulationInertiaStartingObservable()
                       .Subscribe(_ => Debug.WriteLine(string.Format("ManipulationInertiaStarting: {0}, ", DateTime.Now))),

        this.inputModel.ManipulationUpdatedObservable()
                       .Subscribe(_ => OnManipulationUpdated(_)),

        this.inputModel.ManipulationCompletedObservable()
                       .Subscribe(_ => VisualStateManager.GoToState(this.Control, "Unfocused", true)),
    };
}

All subscriptions are composed into one CompositeDisposable that is disposed in the controller’s Dispose() method.
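For completeness, here is a minimal sketch of what that Dispose method might look like. The eventSubscriptions field is the CompositeDisposable built in the constructor above; the rest is an assumed implementation.

```csharp
// Sketch of the controller's Dispose method (assumed implementation).
// Disposing the CompositeDisposable unsubscribes all four event
// subscriptions in one go.
public void Dispose()
{
    if (this.eventSubscriptions != null)
    {
        this.eventSubscriptions.Dispose();
        this.eventSubscriptions = null;
    }
}
```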

  • ManipulationStartedObservable –> fired when the user closes his or her hand
  • ManipulationInertiaStartingObservable –> fired when the active manipulation ends and inertial motion begins (analogous to the XAML ManipulationInertiaStarting event)
  • ManipulationUpdatedObservable –> fired when the user moves the hand while keeping it closed
  • ManipulationCompletedObservable –> fired when the user opens the hand again

For this sample the most interesting observable is ManipulationUpdatedObservable. Every time this event fires, the method OnManipulationUpdated is called.

private void OnManipulationUpdated(KinectManipulationUpdatedEventArgs args)
{
    var draggableElement = this.Element;
    if (!(draggableElement.Parent is Canvas)) return;

    var delta = args.Delta.Translation;
    var translationPoint = new Point(delta.X, delta.Y);
    var translatedPoint = InputPointerManager.TransformInputPointerCoordinatesToWindowCoordinates(translationPoint, this.kinectRegion.Bounds);

    var offsetY = Canvas.GetTop(draggableElement);
    var offsetX = Canvas.GetLeft(draggableElement);

    if (double.IsNaN(offsetY)) offsetY = 0;
    if (double.IsNaN(offsetX)) offsetX = 0;

    Canvas.SetTop(draggableElement, offsetY + translatedPoint.Y);
    Canvas.SetLeft(draggableElement, offsetX + translatedPoint.X);
}

If this IKinectControl is placed inside a Canvas, its position is updated relative to the user’s hand.

The complete code is found here.
This article in markdown is found here.

HeiRes – MS Hackathon – PartyCrasher

On 23rd May 2014 the company HeiRes, together with Microsoft Germany, organized a hackathon in my hometown Dresden. The goal was to build an app in about 8 hours through the night.
I joined a team with Christian, and our idea was to build a ‘PartyCrasher’ app. When you leave your kids alone at home over the weekend, they may get the idea to throw a party at your house, which you may not appreciate because of the mess they made last time. So why not set up the PartyCrasher app in your living room: a Kinect for Windows controller hooked up to a little PC. The Kinect watches the scene in the living room and starts to take pictures as soon as more than a specified number of people are present, for example more than 3. From then on the Kinect takes pictures at an interval that you can configure (maybe every minute).

So we’ve started the hackathon with this little architecture in our mind.
PartyCrasher

In this blog post I want to share how easy it was to program the little Kinect service (a console application) that is hooked up to the Kinect hardware. We made use of the Kinect for Windows SDK V2 Preview, Kinect.ReactiveV2 and Rx to take a photo at the specified interval as soon as more than the specified number of people are in the scene.

using Kinect.ReactiveV2;
using Microsoft.Kinect;
using System;
using System.Linq;
using System.Reactive.Linq;

static void Main(string[] args)
{
  var kinect = KinectSensor.Default;
  kinect.Open();

  var frameDescription = kinect.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Rgba);
  var bytes = new byte[frameDescription.Width * frameDescription.Height * 4];

  var moreThanPeople = 3;
  var intervalInSeconds = 60;

  var reader = kinect.ColorFrameSource.OpenReader();
  var bodies = new Body[6];

  var subscription = kinect.BodyFrameArrivedObservable()
                           .SelectBodies(bodies)
                           .SelectTracked()
                           .Where(_ => _.Count() > moreThanPeople)
                           .Sample(TimeSpan.FromSeconds(intervalInSeconds))
                           .Subscribe(bs =>
                           {
                             using(var frame = reader.AcquireLatestFrame())
                             {
                               if(frame == null) return;
                               frame.CopyConvertedFrameDataToArray(bytes, ColorImageFormat.Rgba);
                             }

                             SaveInBlobStorage(frameDescription, bytes);
                           });

  Console.WriteLine("[ENTER] to stop");
  Console.ReadLine();

  subscription.Dispose();
  kinect.Close();
}

We continued the hackathon by implementing the bits that saved the pictures to Azure Blob Storage. The file references to the Blob Storage were saved in a RavenDB instance on RavenHQ. Later on we implemented an ASP.NET MVC service on Azure Websites that served the pictures to a Windows Store app. While implementing the picture download in the Windows Store app we unfortunately ran out of time. This was the state of the app when we had to stop.
PartyCrasherApp

Anyway, the whole hackathon was really good fun. Big thanks to HeiRes and Microsoft, and maybe some time in the future we’ll finish the PartyCrasher app.

Kinect.ReactiveV2 – Rx-ing the Kinect for Windows SDK

A few weeks ago I finally got my hands on the new Kinect for Windows V2 SDK. There are a few API changes compared to V1, so I started to port Kinect.Reactive to the new Kinect for Windows Developer Preview SDK, and Kinect.ReactiveV2 was born.

Kinect.ReactiveV2 is, like its older brother, a project that contains a bunch of extension methods to ease development with the Kinect for Windows SDK. The project uses the Reactive Extensions (an open-source framework built by Microsoft) to transform the various Kinect reader events into IObservable<T> sequences. This transformation enables you to use LINQ-style query operators on those events.

Here is an example of how to use the BodyIndexFrame data as an observable sequence.

using System.Linq;
using System.Reactive;
using Microsoft.Kinect;
using Kinect.ReactiveV2;

var sensor = KinectSensor.Default;
sensor.Open();

var bodyIndexFrameDescription = sensor.BodyIndexFrameSource.FrameDescription;
var bodyIndexData = new byte[bodyIndexFrameDescription.LengthInPixels];

sensor.BodyIndexFrameArrivedObservable()
      .SelectBodyIndexData(bodyIndexData)
      .Subscribe(data => someBitmap.WritePixels(rect, data, stride, 0));

You’ll also get an extension method called SceneChanges() on every KinectSensor instance, which notifies its subscribers whenever a person enters or leaves the scene.

using System;
using System.Linq;
using System.Reactive;
using Microsoft.Kinect;
using Kinect.ReactiveV2;

var sensor = KinectSensor.Default;
sensor.Open();

sensor.SceneChanges()
      .Subscribe(_ =>
      {
            if (_.SceneChangedType is PersonEnteredScene)
            {
                  Console.WriteLine("Person {0} entered scene", _.SceneChangedType.TrackingId);
            }
            else if (_.SceneChangedType is PersonLeftScene)
            {
                  Console.WriteLine("Person {0} left scene", _.SceneChangedType.TrackingId);
            }
      });

So far, extension methods are included for the BodyFrame, BodyIndexFrame, ColorFrame, DepthFrame, InfraredFrame and MultiSourceFrame.

The source code is available here.
Download the NuGet package from here, or type Install-Package Kinect.ReactiveV2 directly into the Package Manager Console.

Please be aware that “This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

ContinousGrippedState in Kinect.Reactive

For a while I wondered why the Kinect’s InteractionStream sends only one InteractionHandEventType.Grip event when the user closes their hand. While the user still holds the hand closed, the SDK fires events with a HandEventType of None. This confused me from the very beginning. With mouse input, by comparison, you get continuous feedback that the button is still pressed for as long as the user does not release it.

So I thought about a way to get the same functionality when using the Kinect for Windows SDKs 1.x InteractionStream.
This extension method solved my problem and is now part of Kinect.Reactive:

/// <summary>
/// Returns a sequence with continuous GrippedState HandEventType until GripRelease.
/// </summary>
/// <param name="source">The source observable.</param>
/// <returns>The observable.</returns>
public static IObservable<UserInfo[]> ContinousGrippedState(this IObservable<UserInfo[]> source)
{
  if (source == null) throw new ArgumentNullException("source");

  var memory = new Dictionary<Tuple<int, InteractionHandType>, object>();
  var propInfo = typeof(InteractionHandPointer).GetProperty("HandEventType");
  var handEventTypeSetter = new Action<InteractionHandPointer>(o => propInfo.SetValue(o, InteractionHandEventType.Grip));

  return source.Select(_ =>
  {
    _.ForEach(u => u.HandPointers.ForEach(h =>
    {
      var key = Tuple.Create(u.SkeletonTrackingId, h.HandType);

      if (h.HandEventType == InteractionHandEventType.Grip)
      {
        // indexer instead of Add: a second Grip without a GripRelease must not throw
        memory[key] = null;
      }
      else if (h.HandEventType == InteractionHandEventType.GripRelease)
      {
        memory.Remove(key);
      }
      else if (memory.ContainsKey(key))
      {
        handEventTypeSetter(h);
      }
    }));

    return _;
  });
}

Use the extension method this way and you’ll continuously get events with e.HandEventType == InteractionHandEventType.Grip until you release your hand.

IDisposable subscription = null;

KinectConnector.GetKinect().ContinueWith(k =>
{
  var disp = k.Result.KickStart(true)
              .GetUserInfoObservable(new InteractionClientConsole())
              .ContinousGrippedState()
              .SelectMany(_ => _.Select(__ => __.HandPointers.Where(CheckForRightGripAndGripRelease)))
              .Subscribe(_ => _.ForEach(__ => Console.WriteLine(String.Format("Active: {0}, HandEventType: {1}", __.HandType, __.HandEventType))));
                
  subscription = disp;
});

Console.WriteLine("Waiting...");
Console.ReadLine();

Drag & Drop with Kinect for Windows

With the inclusion of the InteractionStream and the ability to detect a grip gesture in the Kinect for Windows SDK Update 1.7, it’s now possible to grab UI elements on screen and move them around. This blog post shows a possible implementation in a WPF application. Please note that I’m using the NuGet packages Kinect.Reactive and FluentKinect.

Code-Behind: MainWindow.cs

// this code can be called after initialization of the MainWindow

// Get a kinect instance with started SkeletonStream and DepthStream
var kinect = await KinectConnector.GetKinect();
kinect.KickStart();

// instantiate an object that implements IInteractionClient
var interactionClient = new InteractionClient();

// GetUserInfoObservable() method is available through Kinect.Reactive
kinect.GetUserInfoObservable(interactionClient)
      .SelectMany(_ => _.Select(__ => __.HandPointers.Where(___ => ___.IsActive)))
      .Where(_ => _.FirstOrDefault() != null)
      .Select(_ => _.First())
      .ObserveOnDispatcher()
      .Subscribe(_ =>
      {
            var region = this.kinectRegion;
            var p = new Point(_.X * region.ActualWidth, _.Y * region.ActualHeight);
            if (_.HandEventType == InteractionHandEventType.Grip)
            {
                  var elem = this.kinectRegion.InputHitTest(p) as TextBlock;
                  if (elem != null)
                  {
                        this.lastTouched = elem;
                  }									  
            }
            else if(_.HandEventType == InteractionHandEventType.GripRelease)
            {
                  this.lastTouched = null;
            }
            else
            {
                  if (this.lastTouched == null) return;
                  
                  Canvas.SetLeft(this.lastTouched, p.X - this.lastTouched.ActualWidth / 2);
                  Canvas.SetTop(this.lastTouched, p.Y - this.lastTouched.ActualHeight / 2);
            }
	});

XAML: MainWindow.xaml


<Window x:Class="DragAndDrop.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:dd="clr-namespace:DragAndDrop"
        xmlns:k="clr-namespace:Microsoft.Kinect.Toolkit.Controls;assembly=Microsoft.Kinect.Toolkit.Controls"
        Title="MainWindow" WindowState="Maximized">
    <Window.Resources>
        <Style TargetType="TextBlock">
            <Setter Property="Height" Value="200" />
            <Setter Property="Width" Value="200" />
            <Setter Property="Foreground" Value="White" />
            <Setter Property="FontWeight" Value="ExtraBold" />
            <Setter Property="FontSize" Value="35" />
            <Setter Property="Text" Value="Drag Me" />
            <Setter Property="TextAlignment" Value="Center" />
            <Setter Property="Background" Value="Black" />
        </Style>
    </Window.Resources>
    <k:KinectRegion x:Name="kinectRegion" KinectSensor="{Binding Kinect}">
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="100" />
                <RowDefinition Height="*" />
            </Grid.RowDefinitions>
            <Grid Grid.Row="0">
                <k:KinectUserViewer x:Name="userViewer" />
            </Grid>
            <Canvas Grid.Row="1">
                <TextBlock Canvas.Left="50" Canvas.Top="50" />
                <TextBlock Canvas.Left="260" Canvas.Top="50" />
                <TextBlock Canvas.Left="470" Canvas.Top="50" />
                <TextBlock Canvas.Left="680" Canvas.Top="50" />
                <TextBlock Canvas.Left="890" Canvas.Top="50" />
            </Canvas>
        </Grid>
    </k:KinectRegion>
</Window>

Subscribing to the InteractionStream the Rx way

The most exciting feature in the Kinect for Windows SDK Update 1.7 was probably the InteractionStream. The InteractionStream is based on the SkeletonStream and the DepthStream, and it enables you to detect basic interactions like a grip gesture or a button push in a Kinect for Windows application.
Since the InteractionStream needs skeleton and depth data for its calculations, you have to feed it both whenever new frames are available.
A straightforward approach could look like this:

var skeletonData = // initialize array
var depthData = // initialize array
var kinect = // somehow get a Kinect sensor instance
IInteractionClient interactionClient = // a class that implements IInteractionClient

var interactionStream = new InteractionStream(kinect, interactionClient);

kinect.AllFramesReady += (s, e) =>
{
    long skeletonTimestamp = 0;
    long depthTimestamp = 0;
    var accelerometerReading = kinect.AccelerometerGetCurrentReading();

    using (var depthImageFrame = e.OpenDepthImageFrame())
    using (var skeletonFrame = e.OpenSkeletonFrame())
    {
      if (depthImageFrame == null || skeletonFrame == null) return;

      skeletonFrame.CopySkeletonDataTo(skeletonData);
      skeletonTimestamp = skeletonFrame.Timestamp;
      depthData = depthImageFrame.GetRawPixelData();
      depthTimestamp = depthImageFrame.Timestamp;
    }

    interactionStream.ProcessDepth(depthData, depthTimestamp);
    interactionStream.ProcessSkeleton(skeletonData, accelerometerReading, skeletonTimestamp);
};

interactionStream.InteractionFrameReady += OnInteractionFrameReady;

// The method that handles the InteractionFrameReady events
private void OnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    UserInfo[] userInfos = // initialize array
    using (var interactionFrame = e.OpenInteractionFrame())
    {
      if (interactionFrame != null)
        interactionFrame.CopyInteractionDataTo(userInfos);
    }

    // do something with the UserInfos array
}

Since I am a huge fan of the Reactive Extensions framework, I tried to find a way to encapsulate this code in one method that produces an IObservable. I wanted to subscribe to the InteractionStream in the same ‘Rx way’ that I am used to for the SkeletonStream, for example. So the goal was to have something like this:

kinect.InteractionStreamObservable().Subscribe(userInfos => 
{
    // do something useful with the userInfos
});

This is the solution I came up with, and it is already included in the Kinect.Reactive NuGet package.

public static IObservable<UserInfo[]> GetUserInfoObservable(this KinectSensor kinectSensor, IInteractionClient interactionClient)
{
    // null checks and checks if streams are enabled

    return Observable.Create<UserInfo[]>(obs =>
    {
        var stream = new InteractionStream(kinectSensor, interactionClient);

        var allFramesSub =
            kinectSensor.GetAllFramesReadyObservable()
                        .SelectStreams((_, __) => Tuple.Create(_.Timestamp, __.Timestamp))
                        .Subscribe(_ =>
                        {
                            var accelerometer = kinectSensor.AccelerometerGetCurrentReading();
                            stream.ProcessSkeleton(_.Item3, accelerometer, _.Item4.Item1);
                            stream.ProcessDepth(_.Item2, _.Item4.Item2);
                        });

        var interactionSub =
            stream.GetInteractionFrameReadyObservable()
                  .SelectUserInfo()
                  .Subscribe(_ => obs.OnNext(_));

        return new Action(() =>
        {
            allFramesSub.Dispose();
            interactionSub.Dispose();
            stream.Dispose();
        });
    });
}

Subscribing to the InteractionStream is now very easy, with all the benefits of the Rx framework included.

kinect.GetUserInfoObservable(new InteractionClient())
      .SelectMany(_ => _.Where(userInfo => userInfo.SkeletonTrackingId == 1))
      .SelectMany(_ => _.HandPointers.Where(handPointer => handPointer.HandType == InteractionHandType.Right))
      // and so on…
 

await GetKinect()

In the first blog post about FluentKinect I mentioned that I’m not very happy with the process of getting a KinectSensor instance from the KinectSensors collection.

FluentKinect has now been updated and the KinectConnector’s static method GetKinect is now awaitable.

If you call KinectConnector.GetKinect() and no KinectSensor is connected to your PC, a new Task is started that listens for StatusChanged events on the KinectSensors collection. If you later plug in a Kinect controller, the Task returns the connected KinectSensor instance.

KinectConnector’s GetKinect method:

public static Task<KinectSensor> GetKinect()
{
	return Task.Factory.StartNew<KinectSensor>(() =>
	{
		if (kinectSensor != null) return kinectSensor;

		var kinect = KinectSensor.KinectSensors
							.FirstOrDefault(_ => _.Status == KinectStatus.Connected);
		if (kinect != null)
		{
			kinectSensor = kinect;
			return kinectSensor;
		}

		using (var signal = new ManualResetEventSlim())
		{
			KinectSensor.KinectSensors.StatusChanged += (s, e) =>
			{
				if (e.Status == KinectStatus.Connected)
				{
					kinectSensor = e.Sensor;
					coordinateMapper = new CoordinateMapper(kinectSensor);
					signal.Set();
				}
			};

			signal.Wait();
		}

		return kinectSensor;
	});
}

How to use it:

var kinect = await KinectConnector.GetKinect();
kinect.Start();

It’s not thread-safe at the moment, but in my opinion there are a few improvements now:

    • You can start and debug your program without a Kinect controller connected, because no exception is thrown anymore. This helps with Kinect programming on a plane, for example. 😉
    • Your program starts faster because GetKinect returns immediately.
    • Since GetKinect returns a Task, you get the ability to await the result.
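Because GetKinect returns a plain Task&lt;KinectSensor&gt;, you can also combine it with the usual Task combinators, for example giving up after a timeout instead of waiting forever. A sketch using the standard Task.WhenAny:

```csharp
// Wait at most 10 seconds for a Kinect to show up, then give up.
var getKinectTask = KinectConnector.GetKinect();
var finished = await Task.WhenAny(getKinectTask, Task.Delay(TimeSpan.FromSeconds(10)));

if (finished == getKinectTask)
{
    // A sensor was found (or was already connected): start it.
    var kinect = getKinectTask.Result;
    kinect.Start();
}
else
{
    Console.WriteLine("No Kinect connected within 10 seconds.");
}
```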

The code was pushed to GitHub and the NuGet package FluentKinect has been updated as well.

Looking forward to new improvements!

Fluent Kinect

Since I started playing around with the Kinect for Windows SDK, I’ve created a lot of little projects and samples to try things out. The starting point was always something like this:

var sensor = KinectSensor.KinectSensors
                         .FirstOrDefault(_ => _.Status == KinectStatus.Connected);
if (sensor == null) throw new InvalidOperationException("No kinect connected");

sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
sensor.SkeletonStream.EnableTrackingInNearRange = true;
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
sensor.SkeletonStream.Enable();
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
sensor.Start();

A lot of code just to set up a Kinect sensor, isn’t it?

Why not use a fluent style with less and cleaner code to set up a Kinect sensor? So I came up with the idea of FluentKinect, a project with a few extension methods. Now I can set up my Kinect sensor this way:

var sensor = KinectSensor.KinectSensors
                         .FirstOrDefault(_ => _.Status == KinectStatus.Connected);
if (sensor == null) throw new InvalidOperationException("No kinect connected");

sensor.EnableColorStream()
      .EnableSkeletonStream()
      .EnableDepthStream()
      .Seated()
      .NearMode()
      .Start();

Because I most often use the 640×480 option anyway, the format is an optional parameter when enabling the streams, and it defaults to the corresponding 640x480Fps30 value.
I’ve extracted the two little lines that get the first connected Kinect sensor into a class called KinectConnector. At the moment an exception is thrown when no Kinect unit is connected. This is not a very good way of handling this scenario and will be changed in the future.
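The extension methods themselves are thin wrappers. Here is a sketch of how EnableColorStream could be implemented; this is an assumption for illustration, the actual FluentKinect source may differ in details:

```csharp
// Sketch: enable the color stream with a default format and return the
// sensor so further calls can be chained. Assumed implementation.
public static KinectSensor EnableColorStream(this KinectSensor sensor,
    ColorImageFormat format = ColorImageFormat.RgbResolution640x480Fps30)
{
    sensor.ColorStream.Enable(format);
    return sensor;
}
```

The other Enable* methods follow the same pattern: do one thing to the sensor, then return it to keep the chain going.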
Now the code is even cleaner:

var sensor = KinectConnector.GetKinect()
                            .EnableColorStream()
                            .EnableSkeletonStream()
                            .EnableDepthStream()
                            .Seated()
                            .NearMode()
                            .Start();

For an even shorter and quicker setup I’ve implemented the method ‘KickStart’, which enables the three streams and calls Start() on the KinectSensor object.
For future ‘try out’ samples I’ll just have to write this:

var sensor = KinectConnector.GetKinect()
                            .KickStart();
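KickStart itself can be expressed in terms of the stream-enabling extensions shown earlier. A plausible sketch (again an assumption about the actual implementation; the bool parameter mirrors the KickStart(true) call seen in the ContinousGrippedState sample above):

```csharp
// Sketch: KickStart enables the three streams with their default formats,
// optionally switches to near mode, and starts the sensor.
// Assumed implementation, not the verified FluentKinect source.
public static KinectSensor KickStart(this KinectSensor sensor, bool nearMode = false)
{
    sensor.EnableColorStream()
          .EnableSkeletonStream()
          .EnableDepthStream();

    if (nearMode)
    {
        sensor.NearMode();
    }

    sensor.Start();
    return sensor;
}
```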

Kinect.Reactive

Events in the .NET programming model don’t really have much in common with what the object-oriented paradigm has taught us. Personally I dislike events because they are not first-class objects but some kind of background compiler voodoo.

When writing against an event-driven API like the Kinect for Windows SDK you don’t have much of a choice but to program against those events. But wait, there is this wonderful Reactive Extensions library that comes to the rescue. Its classes and extension methods let you easily wrap an event in an object, and the library provides tons of handy methods that you don’t have to write yourself.

So I decided to write my own IObservable extension methods to extend the Kinect API with the ReactiveExtensions programming model.
Here are two methods to give you an idea of what I’m talking about:

public static IObservable<AllFramesReadyEventArgs> GetAllFramesReadyObservable(this KinectSensor kinectSensor)
{
   if(kinectSensor == null) throw new ArgumentNullException("kinectSensor");

   return Observable.FromEventPattern<AllFramesReadyEventArgs>(h => kinectSensor.AllFramesReady += h,
                                                               h => kinectSensor.AllFramesReady -= h)
                    .Select(e => e.EventArgs);
}

public static IObservable<ColorImageFrameReadyEventArgs> GetColorFrameReadyObservable(this KinectSensor kinectSensor)
{
   if (kinectSensor == null) throw new ArgumentNullException("kinectSensor");

   return Observable.FromEventPattern<ColorImageFrameReadyEventArgs>(
                                                           h => kinectSensor.ColorFrameReady += h,
                                                           h => kinectSensor.ColorFrameReady -= h)
                    .Select(e => e.EventArgs);
}
[...]

And so on…