s&box wraps Source 2’s rendering pipeline behind the Graphics static class. You interact with the GPU through draw calls, render targets, and RenderAttributes that feed data into shaders.
Pipeline overview
Each frame, the engine renders the active scene through a layered pipeline:
1. Opaque pass: all opaque geometry is drawn front-to-back. This is SceneLayerType.Opaque.
2. Sky pass: the skybox or sky sphere is drawn.
3. Transparent pass: translucent objects are drawn back-to-front. This is SceneLayerType.Translucent.
4. Post-processing: post-process effects (bloom, tonemapping, etc.) run on the composited image.
5. UI pass: screen-space panels are composited on top.
You hook into this pipeline by overriding OnPreRender (runs before the scene is rendered) or by attaching a custom Renderer component.
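A minimal sketch of the OnPreRender hook on a Component. The attribute name "FlashAmount" is illustrative, and the scene-wide Scene.RenderAttributes store is an assumption here; your project may push values through a camera or per-object attribute store instead:

```csharp
public sealed class FrameAttributes : Component
{
	protected override void OnPreRender()
	{
		// Runs once per frame, before the scene is rendered.
		// Push a per-frame value that shaders can read via Attribute( "FlashAmount" ).
		Scene.RenderAttributes.Set( "FlashAmount", MathF.Sin( Time.Now ) );
	}
}
```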
CameraComponent
The CameraComponent controls the view from which the world is rendered. Attach it to a GameObject to define your camera:
// Access the active scene camera
var cam = Scene.Camera;
// Position and rotation follow the GameObject's transform
cam.WorldPosition = new Vector3( 0, 0, 200f );
cam.WorldRotation = Rotation.FromPitch( -15f );
// Field of view in degrees
cam.FieldOfView = 75f;
// Clipping planes
cam.ZNear = 8f;
cam.ZFar = 32768f;
Render to texture
Render a camera’s output into a Texture for use in materials, minimaps, security cameras, etc.:
var target = Texture.CreateRenderTarget()
	.WithSize( 512, 512 )
	.WithImageFormat( ImageFormat.Default )
	.Create();
cameraComponent.RenderToTexture( target );
Graphics API
The Graphics class is only valid inside a rendering block (i.e. inside OnPreRender or a SceneCustomObject.Render override).
State queries
// Are we currently inside a render block?
bool active = Graphics.IsActive;
// What layer are we rendering? Opaque, Translucent, etc.
SceneLayerType layer = Graphics.LayerType;
// Camera data for the current view
Transform camTx = Graphics.CameraTransform;
Vector3 camPos = Graphics.CameraPosition;
Rotation camRot = Graphics.CameraRotation;
float fov = Graphics.FieldOfView;
Frustum frustum = Graphics.Frustum;
// Viewport rect in pixels
Rect viewport = Graphics.Viewport;
Grab frame textures
You can sample the colour or depth buffer mid-pass to drive post-processing or refraction effects:
// Store the current colour buffer in the "FrameTexture" render attribute
var rt = Graphics.GrabFrameTexture( "FrameTexture", Graphics.Attributes );
// With a Gaussian blur mip chain (useful for blurred refraction)
var rtBlurred = Graphics.GrabFrameTexture(
	"FrameTexture",
	Graphics.Attributes,
	DownsampleMethod.GaussianBlur );
// Depth buffer
var depthRt = Graphics.GrabDepthTexture( "DepthTexture", Graphics.Attributes );
Clear and render targets
// Clear with a colour
Graphics.Clear( Color.Black );
// Clear only colour, leave depth
Graphics.Clear( Color.Transparent, clearColor: true, clearDepth: false );
// Redirect rendering to an off-screen texture
Graphics.RenderTarget = myRenderTarget;
Graphics.Clear( Color.Transparent );
// ... draw things ...
Graphics.RenderTarget = null; // restore default
Copy textures
// GPU-side copy — format and size must match
Graphics.CopyTexture( srcTexture, dstTexture );
// Copy a specific mip/array slice
Graphics.CopyTexture( src, dst, srcMipSlice: 0, srcArraySlice: 0,
	dstMipSlice: 1, dstArraySlice: 0 );
Lighting setup
When writing a custom renderer, call this to populate per-object lighting attributes:
Graphics.SetupLighting( sceneObject, Graphics.Attributes );
RenderAttributes
RenderAttributes are a typed key-value store passed to shaders. Set values in C# and read them in HLSL via the Attribute() annotation.
Setting values
var attrs = Graphics.Attributes;
attrs.Set( "Tint", Color.Red );
attrs.Set( "Roughness", 0.5f );
attrs.Set( "UVScale", new Vector2( 2f, 2f ) );
attrs.Set( "WorldOffset", new Vector3( 0, 0, 10f ) );
attrs.Set( "AlbedoTex", myTexture );
// Shader combos (static branches)
attrs.SetCombo( "F_EMISSIVE", true );
attrs.SetComboEnum( "D_BLEND_MODE", BlendMode.Additive );
// Constant buffer data (raw structs)
attrs.SetData( "PerObjectData", myStruct );
Reading values back
float roughness = attrs.GetFloat( "Roughness", defaultValue: 1f );
bool emissive = attrs.GetComboBool( "F_EMISSIVE" );
HLSL shaders
Create a .shader file in your project. The engine uses the Source 2 HLSL dialect with a few extensions.
HEADER
{
	Description = "Custom unlit shader";
}

MODES
{
	VrForward();
	Depth();
	ToolsVis( S_MODE_TOOLS_VIS );
}

COMMON
{
	#include "system.fxc"
	#include "common.fxc"
}

VS
{
	#include "common.vs.hlsl"

	PixelInput MainVs( VertexInput i )
	{
		PixelInput o = ProcessVertex( i );
		return FinalizeVertex( o );
	}
}

PS
{
	#include "common.ps.hlsl"

	// Bind to RenderAttributes keys
	float4 g_vTint < Attribute( "Tint" ); Default4( 1, 1, 1, 1 ); >;
	float g_flRoughness < Attribute( "Roughness" ); Default( 1.0 ); >;
	Texture2D g_tAlbedo < Attribute( "AlbedoTex" ); SrgbRead( true ); >;
	SamplerState g_sSampler < Filter( ANISO ); AddressU( WRAP ); AddressV( WRAP ); >;

	// Combo (static branch)
	StaticCombo( F_EMISSIVE, 0..1, Sys( ALL ) );

	float4 MainPs( PixelInput i ) : SV_Target0
	{
		float4 albedo = Tex2D( g_tAlbedo, g_sSampler, i.vTextureCoords.xy );
		float4 result = albedo * g_vTint;

		#if ( F_EMISSIVE )
			result.rgb *= 2.0;
		#endif

		return result;
	}
}
Using a custom shader
var mat = Material.Load( "materials/my_custom.vmat" );

// For manual draws, pass a RenderAttributes instance alongside the material
var attrs = new RenderAttributes();
attrs.Set( "Tint", Color.Red );
attrs.SetCombo( "F_EMISSIVE", true );

// Scene objects carry their own attribute store for per-object overrides
sceneObject.Attributes.Set( "Tint", Color.Blue );
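To see the shader on a mesh in the scene, a material that uses it can be assigned to a renderer. A hedged sketch; the MaterialOverride property on ModelRenderer is assumed here:

```csharp
// Swap a renderer's material for the custom one.
var renderer = GameObject.GetComponent<ModelRenderer>();
renderer.MaterialOverride = Material.Load( "materials/my_custom.vmat" );

// Per-object attribute values flow through the renderer's scene object.
renderer.SceneObject.Attributes.Set( "Tint", Color.Green );
```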
Post-processing
Attach a PostProcessComponent (or inherit from it) to the camera’s GameObject to apply full-screen effects:
public sealed class VignetteEffect : PostProcessComponent
{
	[Property, Range( 0, 1 )]
	public float Intensity { get; set; } = 0.5f;

	protected override void OnPreRender()
	{
		// Push per-frame attributes into the post-process pass
		Attributes.Set( "VignetteIntensity", Intensity );
	}
}
The corresponding shader samples FrameTexture provided by Graphics.GrabFrameTexture:
PS
{
	Texture2D g_tFrameBuffer < Attribute( "FrameTexture" ); >;
	float g_flVignetteIntensity < Attribute( "VignetteIntensity" ); Default( 0.5 ); >;
	SamplerState g_sSampler < Filter( LINEAR ); AddressU( CLAMP ); AddressV( CLAMP ); >;

	float4 MainPs( PixelInput i ) : SV_Target0
	{
		float2 uv = i.vTextureCoords.xy;
		float4 color = Tex2D( g_tFrameBuffer, g_sSampler, uv );

		// Darken toward the corners based on squared distance from the centre
		float2 center = uv - 0.5;
		float vign = 1.0 - dot( center, center ) * g_flVignetteIntensity * 4.0;
		color.rgb *= saturate( vign );

		return color;
	}
}
Post-process components are ordered by ZIndex. Lower indices run first. Use this to chain effects in the correct order (e.g. bloom before tonemapping, so bloom operates on the HDR image).
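Assuming ZIndex is exposed as a settable property on the component, chaining two effects from C# might look like this (ColorGradeEffect is a hypothetical effect; VignetteEffect is the example above):

```csharp
// Lower ZIndex runs first in the post-process chain.
var grade = Components.GetOrCreate<ColorGradeEffect>();
var vignette = Components.GetOrCreate<VignetteEffect>();

grade.ZIndex = 10;    // runs first
vignette.ZIndex = 20; // runs after colour grading
```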