3-D Graphics Overview

Silverlight 5 introduces the ability to use hardware-accelerated 3D graphics in your Silverlight applications. This opens up a whole new set of scenarios, such as 3D-drawn controls, data visualizers, 3D charts, scatter plots, geographic overlays, and 3D games and simulations. This topic provides an overview of using 3D graphics in Silverlight.

The core of the XNA Game Studio 4.0 graphics libraries is now included in Silverlight 5 and is used to create 3D graphics. Developers familiar with XNA will be able to use their existing knowledge to get up to speed quickly with Silverlight 3D graphics programming.

3D graphics are an illusion that creates the perception of depth on a two-dimensional surface. It is possible to use standard 2D graphics already in Silverlight to create the illusion of 3D, but this would quickly become very complex. The core XNA graphics libraries are tailored specifically for 3D graphics and make the problem much easier to solve.

Silverlight introduces a number of new classes to support 3D graphics. The DrawingSurface control is the main Silverlight control used to render 3D graphics. Silverlight also includes the core of the XNA Graphics library from XNA Game Studio 4.0. The set of new classes is listed below.

Namespaces and 3D Classes

The following table lists the Silverlight namespaces that contain 3D-related classes.

Microsoft.Xna.Framework
Contains common Silverlight 3D graphics structures and classes. It includes the Microsoft.Xna.Framework.Color and Microsoft.Xna.Framework.Rectangle structures. Note that these structures are different from the System.Windows.Media.Color and System.Windows.Shapes.Rectangle classes already present in Silverlight.

Microsoft.Xna.Framework.Audio
Contains classes to load and play audio.

Microsoft.Xna.Framework.Graphics
Contains the majority of the XNA 3D graphics and effects classes.

Microsoft.Xna.Framework.Graphics.PackedVector
Contains data types with components that are not multiples of 8 bits.

System.Windows.Controls
Contains DrawingSurface and DrawEventArgs.


The BitmapSource class has been extended with the CopyTo method (defined in BitmapSourceExtensions) to create a Texture from a bitmap image.

Differences between Silverlight and XNA

The following list calls out a number of the differences between Silverlight 3D graphics and XNA. These points are discussed in more detail in this topic.

  1. Silverlight only supports the Reach profile.

  2. Silverlight supports Shader Model 2.0 only. Silverlight does not support Shader Model 3.0.

  3. Silverlight 3D graphics use the Color and Rectangle structures from Microsoft.Xna.Framework. These have name collisions with the System.Windows.Media.Color and System.Windows.Shapes.Rectangle already in Silverlight and are not compatible.

  4. Software-rendered 3D graphics are not supported in Silverlight. A compatible graphics device is required on the system.

Driver Support

For security and compatibility reasons, some drivers will be blocked by default in the browser. All Windows XP Display Driver Model (XPDM) drivers on Windows XP, Windows Vista, and Windows 7 will be blocked by default. If a driver is blocked, permissions can be granted by the user through the Microsoft Silverlight Configuration settings. Permission is granted automatically in elevated trust scenarios. Windows Display Driver Model (WDDM) drivers do not require user consent at run-time. Blocked drivers can be detected and developers can tailor the user experience to address this.

You can use the RenderMode property on the GraphicsDeviceManager to detect if rendering is Unavailable and the RenderModeReason property to determine why. You can use this information to inform the user that they may need to explicitly give permissions for the driver in the Microsoft Silverlight Configuration settings.

Enabling 3D Graphics

Since 3D graphics in Silverlight are hardware accelerated, the EnableGPUAcceleration parameter must be set to true on the Silverlight plug-in in order for the graphics to be rendered. If the parameter is not set, the scene will not render.

The following example shows how to set the parameter on the <object> tag in the HTML page.

<object data="data:application/x-silverlight-2,"
        type="application/x-silverlight-2" width="100%" height="100%">
    <param name="EnableGPUAcceleration" value="true" />
    <!--  Set other parameters here  -->
</object>

DrawingSurface and Draw

DrawingSurface is the UI control in Silverlight that renders 3D graphics. DrawingSurface is a FrameworkElement, so it can be composed within the Silverlight visual tree like any other control. Since it is a FrameworkElement, DrawingSurface inherits layout properties such as Width, Height, FlowDirection, HorizontalAlignment, and VerticalAlignment and participates in layout like other controls.

DrawingSurface adds one new event, the Draw event. The Draw event handler is where the 3D graphics are composed; world, view, and projection matrices are updated; and DrawPrimitives is called to make the GraphicsDevice render the graphics on the screen.

The Draw event is raised when the system is ready to draw the next frame. The first event is raised after the DrawingSurface has been added to the visual tree. Subsequent Draw events will happen when the drawing surface is invalidated. The drawing surface can be invalidated by calling either DrawingSurface.Invalidate from the UI thread or DrawEventArgs.InvalidateSurface from inside the event handler, which occurs on the render thread.

If the 3D content is relatively static, InvalidateSurface does not need to be called at the end of the Draw event. But if the 3D content is non-static, for example if the 3D objects are being animated, then InvalidateSurface can be called at the end of the event handler to start the draw cycle again.

The SizeChanged event on the DrawingSurface is a good place to add code for dealing with aspect ratio changes.

Also, the Draw event will not be raised if the content is not visible, such as when the element or its parent tree is collapsed or when the graphics device has been removed.

The following example shows how to create a DrawingSurface in XAML. An event handler for the Draw event is defined.

<Grid x:Name="LayoutRoot" Background="Black">
        <DrawingSurface Draw="DrawingSurface_Draw" 
                        Height="500" />
</Grid>

The DrawEventArgs class provides two timing properties which can be used to assist with animation calculations. DrawEventArgs.TotalTime represents the total time elapsed since the application was started. DrawEventArgs.DeltaTime represents the elapsed time since the last draw update and is measured as the difference between total-time samples. For more information, see the animation section in this topic and the Walkthrough: Creating and Animating a 3-D Textured Cube in Silverlight.

The following example shows a partial implementation of a Draw event handler that clears the GraphicsDevice, calls DrawPrimitives, and then invalidates the surface which schedules another Draw event. There are a number of steps which must be completed before graphics will actually render, but this gives you the basics of the Draw event handler.

void DrawingSurface_Draw(object sender, DrawEventArgs e)
{
    // Steps that need to be completed before rendering can occur:
    // Create vertex data, a VertexBuffer, and set the VertexBuffer on the GraphicsDevice
    // Set up the vertex and pixel shaders
    // Create transformation matrices

    // Clear the GraphicsDevice
    e.GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, 
         Color.Transparent, 1.0f, 0);

    // Draw the graphics 
    // This assumes the setup steps above have been completed
    e.GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);

    // Invalidate the surface, which will raise another Draw event.
    e.InvalidateSurface();
}

Basic Steps to Render 3D Graphics (Cheat Sheet)

The following list outlines the minimum steps you need to complete in order to render 3D graphics in your Silverlight applications. The steps are discussed in more detail throughout this topic.

  1. Set EnableGPUAcceleration to true in the Silverlight plug-in parameters.

  2. Create a DrawingSurface control in XAML and define a Draw event handler.

  3. Implement the Draw event handler in code.

  4. Create a structure to hold the vertex data (includes a VertexDeclaration).

  5. Create vertex data for the 3D shape.

  6. Create a VertexBuffer and use SetData to add the vertex data to the buffer.

  7. Create PixelShader and VertexShader objects.

  8. If your vertex shader requires matrices, create the world, view, and projection matrices.

  9. In the Draw event:

    1. Clear the GraphicsDevice by calling the Clear method.

    2. Set the VertexBuffer on the graphics device using SetVertexBuffer.

    3. Set the VertexShader on the graphics device using SetVertexShader. Use SetVertexShaderConstantFloat4 to pass a matrix parameter to the vertex shader.

    4. Set the PixelShader on the graphics device using SetPixelShader.

    5. Call DrawPrimitives.

    6. You can raise another draw event by calling InvalidateSurface.

  10. To invalidate the surface on the UI thread, call Invalidate.

The Silverlight 3D Graphics Pipeline

The Silverlight 3D render pipeline can be broken down into the following steps.

  1. Creating Vertex Data and Primitive Data

  2. Vertex Processing

    1. Vertex Shader

  3. Geometry Processing

    1. Backface Culling

    2. Rasterization

  4. Pixel Processing

    1. Scissor, Stencil, Depth tests

    2. Blending

    3. Texture Surface

    4. Texture Sampling

    5. Pixel Shader

  5. Pixel Rendering

    1. Render pixels to render target


3D objects can be drawn to the screen by breaking shapes into simple triangles. For example, a square can be drawn using two triangles. Meshes of triangles are combined together to create complex shapes.

Points in three-dimensional space are represented as a 3-component vector called a vertex. A vertex contains an x, y, and z component. Vertices are connected together to form primitives, such as triangles, which are defined by three vertices. Multiple triangles can be combined into lists or strips, which form a mesh.

Silverlight 3D graphics supports the following primitive types:

TriangleList

The data is ordered as a sequence of triangles; each triangle is described by three new vertices. Back-face culling is affected by the current winding-order render state.

TriangleStrip

The data is ordered as a sequence of triangles; each triangle is described by two new vertices and one vertex from the previous triangle. The back-face culling flag is flipped automatically on even-numbered triangles.

LineList

The data is ordered as a sequence of line segments; each line segment is described by two new vertices. The count may be any positive integer.

LineStrip

The data is ordered as a sequence of line segments; each line segment is described by one new vertex and the last vertex from the previous line segment. The count may be any positive integer.

The PrimitiveType enumeration lists the primitives.
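As a quick sanity check on these layouts, the following plain C# sketch (not Silverlight-specific; the helper names are invented for this illustration) computes how many vertices each primitive type consumes for a given primitive count:

```csharp
using System;

// Vertices consumed by each primitive layout, following the
// descriptions above. Function names are invented for this sketch.
static int TriangleListVertices(int triangles) => 3 * triangles;
static int TriangleStripVertices(int triangles) => triangles + 2;
static int LineListVertices(int lines) => 2 * lines;
static int LineStripVertices(int lines) => lines + 1;

// A strip reuses vertices from the previous primitive, so a
// 100-triangle mesh needs 300 vertices as a list but only 102 as a strip.
Console.WriteLine(TriangleListVertices(100));  // 300
Console.WriteLine(TriangleStripVertices(100)); // 102
```

This is why strips are often preferred for large meshes: past the first primitive, each additional triangle or line segment costs only one new vertex.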

Coordinate System

The coordinate system defines how points are positioned.

XNA uses a right-handed coordinate system. If you use or create your own math functions, you are not locked into the right-handed coordinate system, since Silverlight 3D is coordinate-system agnostic.

The right-handed coordinate system has x, y, and z axes. The positive z-axis points toward the observer when the positive x-axis is pointing to the right and the positive y-axis is pointing up.
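One way to see the handedness is with a cross product: in a right-handed system, crossing the +x axis into the +y axis yields the +z axis, pointing toward the observer. The following is a minimal plain C# sketch; the Cross helper is invented here and is not part of the Silverlight libraries.

```csharp
using System;

// Minimal 3-component cross product, written out by hand for
// this illustration.
static double[] Cross(double[] a, double[] b) => new[]
{
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0]
};

double[] x = { 1, 0, 0 };
double[] y = { 0, 1, 0 };
double[] z = Cross(x, y);

// In a right-handed system, x cross y points along +z.
Console.WriteLine($"({z[0]}, {z[1]}, {z[2]})"); // (0, 0, 1)
```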

Vertex Data

The vertices of the geometry are stored as vectors. A VertexDeclaration defines the layout of the vertex data. Vertex information is typically stored in a structure which may include data such as position, color, and texture coordinates.

A VertexBuffer is a data buffer that contains a VertexDeclaration which describes the buffer layout to the graphics device. The vertex buffer contains the vertex data.

The following example shows how to create a simple triangle. An array of VertexPositionColor structures is created to hold the vertex information. Then a VertexBuffer is created and its data is set from the vertex array.

/// <summary>
/// Creates a vertex buffer containing a single triangle primitive
/// </summary>
/// <returns>A vertex buffer containing a single triangle primitive</returns>
VertexBuffer CreateTriangle()
{
    VertexPositionColor[] vertices = new VertexPositionColor[3];

    vertices[0].Position = new Vector3(-1, -1, 0); // left
    vertices[1].Position = new Vector3(0, 1, 0);   // top
    vertices[2].Position = new Vector3(1, -1, 0);  // right
    vertices[0].Color = new Microsoft.Xna.Framework.Color(255, 0, 0, 255); // red
    vertices[1].Color = new Microsoft.Xna.Framework.Color(0, 255, 0, 255); // green
    vertices[2].Color = new Microsoft.Xna.Framework.Color(0, 0, 255, 255); // blue

    VertexBuffer vb = new VertexBuffer(
        GraphicsDeviceManager.Current.GraphicsDevice,
        VertexPositionColor.VertexDeclaration,
        vertices.Length,
        BufferUsage.WriteOnly);

    vb.SetData(0, vertices, 0, vertices.Length, 0);

    return vb;
}

The following image shows the rendered triangle.

[Image: the rendered triangle]

World Space and Camera Views

In a 3D scene there are typically multiple spaces that must be maintained. Three common spaces are world space, view space, and projection space. These are defined as 4x4 matrices. It is possible to draw all the 3D objects directly in device coordinates, in which case the vertex shader simply passes the vertex information through with no transformations.

The world space is the main point of reference. It is the space that objects are placed in.

The view space is composed of three components: the position of the camera in world space, the coordinates in world space that the camera is looking at, and the direction that is up. For example, if the first component is (2, 5, 5), the camera is at position (2, 5, 5). If the second component is (0, 0, 0), the camera is pointing toward the origin of world space. Finally, if the third component is (0, 1, 0), the up direction of the camera is along the positive y-axis.
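The three components map directly onto the basis vectors of a look-at (view) transform. The following plain C# sketch derives the camera's axes from those components in the right-handed convention; the helpers are invented for this illustration and this is not the actual XNA Matrix.CreateLookAt implementation.

```csharp
using System;

// Derive a right-handed camera basis from the three view
// components described above. Helper names are invented.
static double[] Sub(double[] a, double[] b) =>
    new[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };

static double[] Normalize(double[] v)
{
    double len = Math.Sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new[] { v[0] / len, v[1] / len, v[2] / len };
}

static double[] Cross(double[] a, double[] b) => new[]
{
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0]
};

// Camera at (0, 0, 5), looking at the origin, with +y up.
double[] eye = { 0, 0, 5 }, target = { 0, 0, 0 }, up = { 0, 1, 0 };

double[] zAxis = Normalize(Sub(eye, target)); // camera looks down its -z
double[] xAxis = Normalize(Cross(up, zAxis));
double[] yAxis = Cross(zAxis, xAxis);

Console.WriteLine($"({zAxis[0]}, {zAxis[1]}, {zAxis[2]})"); // (0, 0, 1)
```

With the camera on the +z axis looking at the origin, the derived basis is simply the world axes, which is a useful sanity check when debugging a hand-built view matrix.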

The final space is defined by the projection matrix. This is the actual region within the view space that the camera sees. The region is defined as a frustum, which is like a pyramid with the pointed portion cut off, leaving a flat plane. Everything in front of the near plane of the frustum is clipped, and everything behind the far plane of the frustum is clipped, so only the objects in the region between the near and far planes are rendered.

The following illustration shows a frustum.

[Illustration: a frustum]
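To make the frustum concrete, the sketch below builds a perspective projection matrix from a vertical field of view, aspect ratio, and near/far plane distances. It is a hand-rolled approximation of what a helper such as XNA's Matrix.CreatePerspectiveFieldOfView produces in the row-vector convention, not the library code itself.

```csharp
using System;

// Right-handed perspective projection matrix (row-vector convention).
// fieldOfView is the vertical field of view in radians; nearPlane and
// farPlane are the distances to the frustum's near and far planes.
static double[,] Perspective(double fieldOfView, double aspectRatio,
                             double nearPlane, double farPlane)
{
    double yScale = 1.0 / Math.Tan(fieldOfView / 2.0);
    double xScale = yScale / aspectRatio;

    var m = new double[4, 4];
    m[0, 0] = xScale;
    m[1, 1] = yScale;
    m[2, 2] = farPlane / (nearPlane - farPlane); // maps z into the [0, 1] depth range
    m[2, 3] = -1.0;                              // drives the perspective divide
    m[3, 2] = nearPlane * farPlane / (nearPlane - farPlane);
    return m;
}

// A 90-degree vertical field of view gives yScale = 1.
var proj = Perspective(Math.PI / 2.0, 16.0 / 9.0, 1.0, 100.0);
Console.WriteLine(Math.Round(proj[1, 1], 6)); // 1
```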

See the Walkthrough: Creating and Animating a 3-D Textured Cube in Silverlight for an example of creating and updating the different matrices.


Color

Silverlight 3D graphics uses the Color structure in the Microsoft.Xna.Framework namespace. Make sure to use the correct Color (and Rectangle) structure, since these names collide with classes already in Silverlight.

Silverlight 3D graphics has full support for color blending.

Surfaces, colors, and blends default to premultiplied alpha. This is different from the standard Color class in Silverlight. To help convert from traditional color channel values, you can use the FromNonPremultiplied method. The default BlendState.AlphaBlend uses Blend.One instead of Blend.SourceAlpha.
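Conceptually, converting to premultiplied form scales each color channel by the alpha channel, which is what FromNonPremultiplied does for you. The following plain C# sketch shows the arithmetic; the Premultiply helper is invented for this illustration.

```csharp
using System;

// Scale each color channel by alpha (the premultiplied-alpha form).
// This helper is invented for the sketch; in Silverlight you would
// call Microsoft.Xna.Framework.Color.FromNonPremultiplied instead.
static (byte R, byte G, byte B, byte A) Premultiply(byte r, byte g, byte b, byte a)
    => ((byte)(r * a / 255), (byte)(g * a / 255), (byte)(b * a / 255), a);

// Half-transparent red in traditional (non-premultiplied) channels
// becomes (128, 0, 0, 128) once premultiplied.
var c = Premultiply(255, 0, 0, 128);
Console.WriteLine(c); // (128, 0, 0, 128)
```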

Shaders

Shaders are programs that are executed on the graphics hardware rather than the CPU. Shader programs perform operations on the vertex data and pixel data and can be used to create rendering effects.


Instead of using shaders, you can use the XNA effects types in the Microsoft.Xna.Framework.Graphics namespace.

Vertex shaders are run against each vertex. The shader is responsible for transforming the position of the primitives in world space to projection space. The input format for the vertex shader maps to the VertexDeclaration.

Pixel shaders are run against each pixel that is rendered. It takes in per-pixel data, which is passed in as input from the vertex shader, and outputs color data for each pixel.

Silverlight 5 supports Shader Model 2.0 and HLSL byte code. HLSL is the High Level Shading Language for DirectX. For more information on HLSL, see the HLSL section on MSDN.

Shaders must be compiled and added to your Visual Studio project as a Resource. You can compile your shader with the FXC.exe utility which is included as part of the DirectX SDK.

To compile a VertexShader with a source file named Triangle.vs.hlsl, you can use the following command:

Fxc.exe /T vs_2_0 /O3 /Zpr /Fo Triangle.vs Triangle.vs.hlsl

To compile a PixelShader with a source file named Triangle.ps.hlsl, you can use the following command:

Fxc.exe /T ps_2_0 /O3 /Zpr /Fo Triangle.ps Triangle.ps.hlsl

The /Zpr flag indicates that the matrices are packed in row-major order. Alternatively, you can omit the /Zpr flag and transpose your matrices before setting them as constants.

For more information on using FXC.exe, see the Effect-Compiler section on MSDN.

The following example shows a simple vertex shader that takes a transformation matrix as a parameter from the application. The matrix is passed into the shader by using the SetVertexShaderConstantFloat4 method. The VertexData struct maps to the VertexDeclaration example used previously in this topic.

// transformation matrix provided by the application
float4x4 WorldViewProj : register(c0);

// vertex input to the shader
struct VertexData
{
  float3 Position : POSITION;
  float4 Color : COLOR;
};

// vertex shader output passed through to geometry 
// processing and a pixel shader
struct VertexShaderOutput
{
  float4 Position : POSITION;
  float4 Color : COLOR;
};

// main shader function
VertexShaderOutput main(VertexData vertex)
{
  VertexShaderOutput output;

  // apply standard transformation for rendering
  output.Position = mul(float4(vertex.Position, 1), WorldViewProj);

  // pass the color through to the next stage
  output.Color = vertex.Color;
  return output;
}

The following example shows a pixel shader that simply passes color data straight through.

// output from the vertex shader
struct VertexShaderOutput
{
  float4 Position : POSITION;
  float4 Color : COLOR;
};

// main shader function
float4 main(VertexShaderOutput vertex) : COLOR
{
  return vertex.Color;
}

To use a shader in your application, add the compiled shaders to your Visual Studio project and set the Build Action to Resource.

The following example shows how to load a compiled vertex shader and pixel shader from a resource and how to create the VertexShader and PixelShader objects. Streams are first created to hold the raw shader data and then the static VertexShader.FromStream and PixelShader.FromStream methods are used to instantiate new VertexShader and PixelShader objects.

First the shader variables are created.

VertexShader vertexShader;
PixelShader pixelShader;

Assuming the name of the Silverlight application is TriangleApp, the following code shows how to load a shader from a resource.

Stream shaderStream = Application.GetResourceStream(
        new Uri(@"TriangleApp;component/Triangle.vs", UriKind.Relative)).Stream;

vertexShader = VertexShader.FromStream(
        GraphicsDeviceManager.Current.GraphicsDevice, shaderStream);

shaderStream = Application.GetResourceStream(
        new Uri(@"TriangleApp;component/Triangle.ps", UriKind.Relative)).Stream;

pixelShader = PixelShader.FromStream(
        GraphicsDeviceManager.Current.GraphicsDevice, shaderStream);

Once you have created your shaders, you need to add them to the GraphicsDevice when you are composing the 3D graphics to be drawn, using the SetVertexShader and SetPixelShader methods.

Be sure to use the correct PixelShader class. In Silverlight, there is another class with the name PixelShader in the System.Windows.Media.Effects namespace which is used to create custom effects for WriteableBitmap objects.

The following example shows how to add the shaders to the GraphicsDevice. A VertexShader named vertexShader and PixelShader named pixelShader are added to the GraphicsDevice. The variable viewProjection, which is passed into the shader, is a Matrix.

// Get the GraphicsDevice
GraphicsDevice device = GraphicsDeviceManager.Current.GraphicsDevice;

// Add vertex shader to graphics device
device.SetVertexShader(vertexShader);

// pass the transform to the shader
// Note, parameter viewProjection is a 4x4 Matrix
device.SetVertexShaderConstantFloat4(0, ref viewProjection);

// Add pixel shader to graphics device 
device.SetPixelShader(pixelShader);

Animation

It is worth pointing out that 3D graphics in Silverlight are not animated using the Silverlight Storyboard objects. Rather, 3D animations are created by changing the transform of an object over time.

Frames are not always rendered at a consistent frequency, so animation that is tied to the frame rate may not render smoothly or predictably. A better way to control timing in 3D animation is to use the delta time between frames to convert from units per frame to units per second. The following formula shows how this can be accomplished.

NewPosition = Position + Direction * Speed * DeltaTime;

The DrawEventArgs, which is passed into the Draw event handler, contains two time related properties: DrawEventArgs.TotalTime and DrawEventArgs.DeltaTime. TotalTime is the time elapsed since the application was started. DeltaTime is the time interval since the last draw update.
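Applying the formula above, the following plain C# sketch (the Step helper is invented for this illustration) shows why delta-time-based movement is frame-rate independent: two half-second frames advance an object exactly as far as one one-second frame.

```csharp
using System;

// Frame-rate independent movement: speed is in units per second,
// so each frame advances by speed * elapsed seconds.
static double Step(double position, double direction, double speed, double deltaTime)
    => position + direction * speed * deltaTime;

// Two 0.5 s frames cover the same distance as one 1.0 s frame.
double twoShortFrames = Step(Step(0.0, 1.0, 10.0, 0.5), 1.0, 10.0, 0.5);
double oneLongFrame = Step(0.0, 1.0, 10.0, 1.0);

Console.WriteLine(twoShortFrames); // 10
Console.WriteLine(oneLongFrame);   // 10
```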

See the following Walkthrough: Creating and Animating a 3-D Textured Cube in Silverlight for a detailed sample of animating a 3D cube.

Textures

Textures in 3D graphics are rasterized images, such as JPG and PNG files, which are used to modify the color of a surface. Loading an image is a common way to initialize a Texture. Textures can be set on the GraphicsDevice using the Textures property.

The following example shows how to use the CopyTo method on the BitmapSourceExtensions class (which extends BitmapSource) to convert a PNG image into a Texture. The PNG image is stored as a resource. CubeSample is the name of the project and SLXNA.png is the name of the image.

// Load image
Stream imageStream = Application.GetResourceStream(
    new Uri(@"CubeSample;component/SLXNA.png", UriKind.Relative)).Stream;

BitmapImage image = new BitmapImage();
image.SetSource(imageStream);

// Create texture
Texture2D texture = new Texture2D(
    GraphicsDeviceManager.Current.GraphicsDevice,
    image.PixelWidth,
    image.PixelHeight,
    false,
    SurfaceFormat.Color);

// Copy image to texture
image.CopyTo(texture);
See the following Walkthrough: Creating and Animating a 3-D Textured Cube in Silverlight for a 3D sample that uses textures.

Threading

Silverlight applications use multiple threads. For example, the main application thread is the UI thread. Silverlight 3D graphics rendering occurs on the composition thread.

It is recommended that you update your data model on the UI thread or a background thread, not on the composition thread. You can then read from the data model in the Draw callback. This keeps the time spent in the Draw event handler to a minimum, which helps performance.

If data needs to be updated or sampled atomically, use a lock, but avoid locking for the entire callback by caching data locally at the beginning of the callback.

Do not access DependencyObject objects from the Draw event handler.

GraphicsDeviceManager and GraphicsDevice

The GraphicsDeviceManager class provides functionality for interacting with the GraphicsDevice. The GraphicsDeviceManager class exposes a static property called Current to get an instance of the current GraphicsDeviceManager. The GraphicsDevice property gets an instance of the GraphicsDevice. You can check the hardware rendering support of the graphics adapter with the RenderMode property and get an explanation for the current render mode with the RenderModeReason property. The RenderModeChangedEventArgs also contains RenderMode and RenderModeReason properties.

The GraphicsDevice class (and the GraphicsDeviceExtensions class) contains methods and properties for interacting with the GraphicsDevice. For example, you can get and set the VertexBuffer, VertexShader, and PixelShader. You can use the Clear method to clear the GraphicsDevice and the DrawIndexedPrimitives and DrawPrimitives methods to draw 3D objects. The GraphicsDevice also contains a number of properties related to blending, depth, stencil, and textures.

There are situations when resources may need to be recovered. If the render mode transitions to Unavailable, certain types of content are removed. This includes all textures, all render targets, and all vertex and index buffers. Shaders are not removed.

Listening for the RenderModeChanged event and restoring these lost resources when the GraphicsDevice transitions out of the Unavailable state can make your applications more resilient. Failing to handle this event will result in rendering blank content for resources that are not reloaded.

The following example sets up an event handler for the RenderModeChanged event.

GraphicsDeviceManager deviceManager = GraphicsDeviceManager.Current;

deviceManager.RenderModeChanged += 
    new EventHandler<RenderModeChangedEventArgs>(deviceManager_RenderModeChanged);

The following example creates an event handler for the RenderModeChanged event and checks whether rendering is transitioning out of the Unavailable state.

void deviceManager_RenderModeChanged(object sender, RenderModeChangedEventArgs e)
{
    if (e.OldRenderMode == RenderMode.Unavailable)
    {
        // reload textures, etc.
    }
}

The following table lists the specifications that the Silverlight 3D graphics Reach profile supports.

Platform
Windows XP, Windows Vista, and Windows 7 with a DirectX 9 GPU that supports at least Shader Model 2.0.

Shader Model
2.0

Maximum texture size
2048

Maximum cube map size
512

Maximum volume texture size
Volume textures are not supported.

Maximum number of vertex streams
16

Maximum vertex stream stride
255

Index buffer formats
16 bit

Vertex element formats
Color, Byte4, Single, Vector2, Vector3, Vector4, Short2, Short4, NormalizedShort2, NormalizedShort4

Texture formats
Color, Bgr565, Bgra5551, Bgra4444, NormalizedByte2, NormalizedByte4

Vertex texture formats
Vertex texturing is not supported.

Multiple render targets
Not supported.

Occlusion queries
Not supported.

Separate alpha blend
Only for SourceBlend, not DestinationBlend.