The s&box rendering pipeline starts with a CameraComponent, collects scene objects, runs them through the engine’s layered render passes, and finally applies post-processing effects. This page explains how each piece of the pipeline works and how you can hook into it from C#.

CameraComponent

CameraComponent is the component that drives rendering for a scene. Every active scene needs at least one. It wraps an internal SceneCamera and pushes the scene through the render pipeline each frame.

Key properties

| Property | Default | Description |
| --- | --- | --- |
| FieldOfView | 60 | Horizontal or vertical FOV in degrees (controlled by FovAxis). |
| ZNear | 10 | Near clip plane. Values below 5 produce z-fighting artefacts. |
| ZFar | 10000 | Far clip plane. Balance with ZNear for your game's scale. |
| IsMainCamera | true | Whether this camera is the primary game camera. |
| Priority | 1 | Higher values render on top when multiple cameras are active. |
| Orthographic | false | Switch to orthographic projection. Requires OrthographicHeight. |
| Viewport | (0,0,1,1) | Normalised screen rect this camera renders into. |
| RenderTags | empty | Include only objects with these tags. Empty = include all. |
| RenderExcludeTags | empty | Exclude objects that have any of these tags. |
| RenderTexture | null | Assign a RenderTextureAsset to render into a texture instead of the screen. |
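The properties above can be set from any component that can reach the camera. The sketch below assumes a component attached to the same GameObject as the CameraComponent; the tag name is hypothetical:

```csharp
// Sketch: configuring a scene camera at startup.
public sealed class CameraSetup : Component
{
	protected override void OnStart()
	{
		var cam = Components.Get<CameraComponent>();

		cam.FieldOfView = 75f;   // wider FOV; axis chosen by FovAxis
		cam.ZNear = 5f;          // keep at or above 5 to avoid z-fighting
		cam.ZFar = 20000f;       // large outdoor scene
		cam.IsMainCamera = true; // make this the primary game camera

		// Skip anything tagged "editoronly" (hypothetical tag name)
		cam.RenderExcludeTags.Add( "editoronly" );
	}
}
```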

Rendering to a texture

You can redirect a camera to render into any Texture created as a render target.
// Create a render-target texture, then assign it
var tex = Texture.CreateRenderTarget()
    .WithSize( 1024, 1024 )
    .WithFormat( ImageFormat.Default )
    .Create();

Camera.RenderTarget = tex;
You can also call CameraComponent.RenderToTexture to perform an immediate off-screen render without permanently redirecting the camera:
bool success = Camera.RenderToTexture( myTexture );

Coordinate helpers

CameraComponent has several helpers for converting between world space and screen space:
// World position → normalised screen coordinate (0–1)
Vector2 screenNorm = Camera.PointToScreenNormal( worldPos );

// World position → pixel position
Vector2 pixels = Camera.PointToScreenPixels( worldPos );

// Screen pixel → world-space ray
Ray ray = Camera.ScreenPixelToRay( Mouse.Position );

// BBox in world → screen rect
Rect screenRect = Camera.BBoxToScreenPixels( myBBox, out bool isBehind );

// Screen pixel → point on near frustum plane
Vector3 worldPoint = Camera.ScreenToWorld( screenPixel );

Post-processing

Post-processing is enabled by default (EnablePostProcessing = true). You can anchor the post-process volume lookup to a different position than the camera itself:
Camera.PostProcessAnchor = myGameObject; // volumes sampled from this position
Camera.EnablePostProcessing = false;     // disable entirely
The camera automatically collects bloom, auto-exposure, volumetric fog, and cubemap fog from scene volumes and components when ClearFlags includes ClearFlags.Color.
The old render hook API (AddHookAfterOpaque, AddHookAfterTransparent, etc.) is obsolete as of mid-2025. Use CommandList instead for custom render stages.
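As a rough illustration of the replacement pattern, the sketch below registers a CommandList against the camera. The stage name, the material path, and the Blit call are assumptions; check the CommandList API reference for the exact signatures in your engine version:

```csharp
// Sketch: a custom render stage via CommandList instead of the old hook API.
using Sandbox.Rendering;

public sealed class FullscreenTint : Component
{
	CommandList _commands;

	protected override void OnEnabled()
	{
		_commands = new CommandList( "Fullscreen Tint" );
		_commands.Attributes.Set( "TintColor", new Color( 1f, 0f, 0f, 0.2f ) );

		// Hypothetical post-process material that reads TintColor
		_commands.Blit( Material.Load( "materials/postprocess/tint.vmat" ) );

		// Stage.AfterTransparent is an assumed stage name
		Components.Get<CameraComponent>()
			.AddCommandList( _commands, Stage.AfterTransparent );
	}

	protected override void OnDisabled()
	{
		Components.Get<CameraComponent>()?.RemoveCommandList( _commands );
	}
}
```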

RenderAttributes

RenderAttributes is a typed key-value store that passes data from C# to shaders. You set values in C# and read them in HLSL using the Attribute(...) annotation.

Setting values

var attrs = new RenderAttributes();

// Primitives
attrs.Set( "MyFloat", 0.5f );
attrs.Set( "MyColor", new Vector4( 1, 0, 0, 1 ) );
attrs.Set( "MyFlag", true );

// Textures
attrs.Set( "MyTexture", someTexture );

// Structured data (constant buffer)
attrs.SetData( "MyStruct", myUnmanagedStruct );

Shader combos

Combos are compile-time shader permutations toggled at runtime:
attrs.SetCombo( "D_TRANSLUCENT", 1 );
attrs.SetComboEnum( "D_MY_MODE", MyModeEnum.Fast );

// Read back
int val = attrs.GetComboInt( "D_TRANSLUCENT" );
bool on = attrs.GetComboBool( "D_MY_FLAG" );

Reading values back

float f    = attrs.GetFloat( "MyFloat", defaultValue: 0f );
Vector3 v  = attrs.GetVector( "MyVec", Vector3.Zero );
Texture t  = attrs.GetTexture( "MyTexture" );
Matrix m   = attrs.GetMatrix( "MyMatrix" );

Accessing attributes inside a render pass

During a render callback, Graphics.Attributes gives you the current context’s attributes. Changes you make here persist only for the current render block:
Graphics.Attributes.Set( "FrameTime", Time.Delta );

Binding in HLSL

// float attribute
float g_flMyFloat < Attribute( "MyFloat" ); >;

// Texture attribute
Texture2D g_tMyTexture < Attribute( "MyTexture" ); SrgbRead( false ); >;

Graphics class

Graphics is a static class that exposes the active render context. It is only valid inside a render callback — accessing it outside one throws an exception.

Viewport and render target

// Read or set the current viewport rect (pixels)
Rect vp = Graphics.Viewport;
Graphics.Viewport = new Rect( 0, 0, 512, 512 );

// Redirect rendering to a texture for this block
Graphics.RenderTarget = myRenderTarget;

// Restore default target
Graphics.RenderTarget = null;

// Clear current target
Graphics.Clear( Color.Black, clearColor: true, clearDepth: true, clearStencil: true );

Camera state

Transform camTx  = Graphics.CameraTransform;
Vector3   camPos = Graphics.CameraPosition;
Rotation  camRot = Graphics.CameraRotation;
float     fov    = Graphics.FieldOfView;
Frustum   f      = Graphics.Frustum;
bool      active = Graphics.IsActive;

Frame and depth texture capture

You can grab a snapshot of the framebuffer or depth buffer and make it available to materials as a named attribute:
// Grab colour buffer (optionally generate mips for blur effects)
RenderTarget frame = Graphics.GrabFrameTexture( "FrameTexture",
    renderAttributes: null,
    downsampleMethod: DownsampleMethod.GaussianBlur,
    maxMips: 4 );

// Grab depth buffer
RenderTarget depth = Graphics.GrabDepthTexture( "DepthTexture" );
Both captured textures are pooled and released automatically at the end of the render scope.

Texture copy

// Same format and size required
Graphics.CopyTexture( srcTexture, dstTexture );

// Copy a specific mip or array slice
Graphics.CopyTexture( src, dst, srcMipSlice: 0, srcArraySlice: 0, dstMipSlice: 0, dstArraySlice: 0 );

Lighting setup

// Populate per-object lighting into target attributes
Graphics.SetupLighting( sceneObject, targetAttributes );

Primitive types

The Graphics.PrimitiveType enum lists all supported draw topologies:
| Value | Description |
| --- | --- |
| Points | Individual point sprites |
| Lines | Disconnected line segments |
| LineStrip | Connected line chain |
| Triangles | Standard triangle list |
| TriangleStrip | Triangle strip |
| *WithAdjacency | Variants with geometry shader adjacency data |

Layer types

Graphics.LayerType tells you which render layer is currently executing. The engine calls render callbacks once per layer:
if ( Graphics.LayerType == SceneLayerType.Opaque )
{
    // draw opaque geometry
}
If you are unsure which layer your draw call should live in, use SceneLayerType.Opaque for solid geometry and SceneLayerType.Translucent for anything with alpha.

Next steps

Custom shaders

Write HLSL shaders, compute shaders, and ray-tracing shaders.

UI panels

Build screen and world-space UI with the Panel system.
