# Benefits of DXR 1.1 and the RayQuery type:

- Seamless integration into pre-existing graphics pipelines: bind the acceleration structure resource to any shader stage and use a RayQuery object to traverse it.

- Can be added to any shader stage, but is most useful in a compute or pixel shader.

- Does not require state object or shader table management at the API level (no anyhit, closesthit, raygen, or miss shaders).

- Removes the need for ray recursion.

- Conveniently delineates ray tracing methods by initializing each RayQuery object with compile-time flags indicating the type of ray.

- Allows lower-level management of the ray traversal itself by returning non-opaque and procedural-intersection candidates to the shader.
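As a concrete sketch of the binding story, the declarations below are everything a pixel or compute shader needs to start tracing; the register assignments (and the pairing of these particular resource names, which match the later examples) are assumptions for illustration, not fixed API requirements:

```hlsl
// The top-level acceleration structure binds as a plain SRV, so it can be
// declared in any shader stage (register slots here are illustrative)
RaytracingAccelerationStructure rtAS : register(t0);

// Resources used by the shadow/transparency examples later in this write up
Texture2D    transparencyTexture1 : register(t1);
SamplerState textureSampler       : register(s0);
```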

# Getting Started with DXR 1.1:

There is a great primer on DXR 1.1 and RayQuery by Amar Patel: https://devblogs.microsoft.com/directx/dxr-1-1/.

Prereqs for building and running DXR 1.1:

- Windows 10 Build 19035 (codename Vibranium), which supports DXR 1.1. If you don't want to update to the latest Windows 10 Vibranium build, you can always pull down the standalone DXIL compiler and generate DXIL from some of the examples in this write up.

- The latest DXIL compiler with support for the DXR 1.1 RayQuery intrinsics, which you can either build from source here https://github.com/microsoft/DirectXShaderCompiler or grab per-commit binaries here https://ci.appveyor.com/project/antiagainst/directxshadercompiler/builds/31331168/artifacts

- Currently there aren't any drivers in the wild that support DXR 1.1, but hopefully AMD and Nvidia will support it in the near future. Nvidia plans to support AddToStateObject, which is a subset capability of DXR 1.1, so I can see support being available from them soon: https://news.developer.nvidia.com/dxr-tier-1-1/. From the AMD perspective, DXR 1.1 will also be supported in the future and will be used in some fashion to accelerate the next-gen Xbox's and PlayStation's ray tracing capabilities: https://www.pcgamesn.com/amd/ray-tracing-microsoft-dxr-co-developed
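For reference, a RayQuery shader can be compiled offline with the standalone dxc roughly like this; shader model 6.5 is required for the inline raytracing intrinsics, and the file and entry point names below are placeholders:

```
dxc -T ps_6_5 -E PSMain DeferredShading.hlsl -Fo DeferredShading.dxil
```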


# Generating shadows using DXR 1.1:

Normally, to create shadows using rasterization we first do a depth-only render pass from the directional light source's position and direction. Then, in a deferred pixel shading pass, we use that light depth buffer to determine whether the current pixel's depth, measured in the light source's frame of reference, is less than the depth stored in the buffer; if it is, the pixel is lit, otherwise it is in shadow.

With DXR 1.1 we can do a similar thing but remove the early rasterized light depth pass and instead trace a ray from the pixel's position to determine whether the pixel needs to be shadowed. To know where to shoot the ray, we first need a depth-only pass from the camera's perspective, which most deferred rendering engines do anyway. Later, in the deferred pixel shading pass, we take the camera's perspective depth and reconstruct the pixel's world-space position by applying the inverse view-projection matrix to convert the pixel from clip space to world space. That gives us the origin of the ray, and we can easily calculate the direction vector as the normalized difference between the light source's position and the pixel's world-space position. We then launch the ray with the RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH flag, because if any geometry interrupts the ray's transmission we know the pixel is in shadow. Now we will take a look at the HLSL shader code required to introduce RayQuery into the pixel shader.
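The reconstruction step described above might look roughly like this in HLSL; the depth texture, screen dimension, matrix and light names here are assumptions for illustration, not code from a particular engine:

```hlsl
// Reconstruct the pixel's world-space position from the camera depth buffer
// (resource and constant names here are illustrative)
float depth = cameraDepthTexture.Load(int3(pixelCoord, 0)).r;

// Pixel coordinate -> NDC in [-1, 1] (y flipped for D3D conventions)
float2 ndc = float2(pixelCoord) / screenDimensions * 2.0f - 1.0f;
ndc.y = -ndc.y;

// Unproject from clip space back to world space
float4 worldPos = mul(float4(ndc, depth, 1.0f), inverseViewProjection);
worldPos /= worldPos.w;

// Shadow ray from the reconstructed surface position toward the light
float3 origin = worldPos.xyz;
float3 rayDir = normalize(lightPosition - worldPos.xyz);
```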


# Integrating DXR 1.1 RayQuery into a deferred pixel shader pass:

First we introduce the RayQuery instantiation itself:

```hlsl
RayDesc ray;
ray.Origin = origin;
ray.Direction = rayDir;
ray.TMin = 0.1f;
ray.TMax = 100000.0f;

// Compile-time flag: accept the first hit and stop traversal (ideal for shadow rays)
RayQuery<RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH> rayQuery;

// Params: acceleration structure, additional runtime ray flags (none),
// instance inclusion mask (~0 tests all instances), ray description
rayQuery.TraceRayInline(rtAS, RAY_FLAG_NONE, ~0, ray);
```

RayQuery traversal loop management:

```hlsl
// Transparency testing: Proceed() returns true while there are candidates
// (non-opaque triangles here) that the shader must resolve manually
while (rayQuery.Proceed())
{
    // If a triangle is non-opaque then manually process the candidate
    if (rayQuery.CandidateType() == CANDIDATE_NON_OPAQUE_TRIANGLE)
    {
        // Candidate instance ID is a user marker to indicate specific sub-meshes of a geometry
        if (rayQuery.CandidateInstanceID() == 1)
        {
            // Fetch the uv buffer and interpolate uv coordinates using the barycentrics
            float2 texCoord = GetTexCoord(rayQuery.CandidateTriangleBarycentrics(),
                                          rayQuery.CandidateInstanceID(),
                                          rayQuery.CandidatePrimitiveIndex());

            // Test whether the texture indicates transparency at the ray intersection point
            float alphaValue = transparencyTexture1.SampleLevel(textureSampler, texCoord, 0).a;

            // Commit the ray-triangle intersection only if the texture indicates an opaque hit
            // This test is done for triangles that represent foliage/leaves
            if (alphaValue > 0.1)
            {
                rayQuery.CommitNonOpaqueTriangleHit();
            }
        }
    }
}

// After traversal completes, process the committed hit,
// which is similar to closest hit shading with DXR 1.0
if (rayQuery.CommittedStatus() == COMMITTED_TRIANGLE_HIT)
{
    // Calculate the hit position in world space
    float3 hitPosition = rayQuery.WorldRayOrigin() +
                         (rayQuery.CommittedRayT() * rayQuery.WorldRayDirection());

    // Transform the pixel hit location to the light's clip space
    float4x4 lightViewProj = mul(lightViewMatrix, lightProjectionMatrix);
    float4 clipSpace = mul(float4(hitPosition, 1), lightViewProj);

    // Z component is the light source's distance to the triangle hit location
    rtDepth = clipSpace.z;
}
```


# DXR 1.1 shader intrinsics:

- RayQuery<RAY_FLAGS>: A ray query object that can be templated with compile-time ray flags, allowing the compiler to optimize traversal specifically for this type of query.

- TraceRayInline: Binds the RayQuery to an acceleration structure, sets additional runtime ray flags and the instance inclusion mask, and defines the ray parameters including the origin, direction, and extents (TMin/TMax) of the ray.

- Proceed: Executes traversal of the RayQuery and returns true whenever the ray intersects non-opaque triangles or procedural geometry in the acceleration structure. When the ray intersects one of these special types of geometry, it is the shader's responsibility to handle the candidate. The candidate's information is easily accessed through getter functions, and if the candidate is deemed a valid hit, the shader promotes it to a committed hit by calling CommitNonOpaqueTriangleHit. This type of processing is analogous to the anyhit/intersection shader stages of DXR 1.0.
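As an aside, for a scene containing only opaque geometry the loop isn't needed at all; a minimal shadow-query sketch (reusing the rtAS and ray setup from the earlier examples) could look like this:

```hlsl
// Opaque-only shadow query: cull non-opaque geometry entirely so no
// candidates ever require manual processing
RayQuery<RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH |
         RAY_FLAG_CULL_NON_OPAQUE> shadowQuery;
shadowQuery.TraceRayInline(rtAS, RAY_FLAG_NONE, ~0, ray);

// With no candidates possible, a single Proceed() call completes the
// entire traversal and returns false
shadowQuery.Proceed();

bool inShadow = (shadowQuery.CommittedStatus() == COMMITTED_TRIANGLE_HIT);
```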


# UV coordinate calculation:

When a non-opaque triangle is presented as a candidate, we need to know the exact point on the triangle intersected by the ray. CandidateTriangleBarycentrics() gives us the x and y barycentric coordinates along the two edges of the triangle that start at vertex 0; if both components are 0, the ray intersected the triangle at (or very close to) vertex 0. Just as we can reconstruct a point on the triangle's plane from its three vertices, we can apply the same technique to the UV coordinates by interpolating them with the barycentric weights. To do this we need to supply the shader with a UV coordinate buffer and extract the correct UVs by indexing into it using CandidateInstanceID and CandidatePrimitiveIndex.

```hlsl
float2 GetTexCoord(float2 barycentrics, uint instanceContribution, uint primitiveIndex)
{
    // instanceContribution would typically offset into the geometry buffer to
    // select the correct sub-mesh; a single mesh is assumed here for brevity
    float2 texCoord[3];
    texCoord[0] = geometryBuffer.Load((primitiveIndex * 3) + 0).uv;
    texCoord[1] = geometryBuffer.Load((primitiveIndex * 3) + 1).uv;
    texCoord[2] = geometryBuffer.Load((primitiveIndex * 3) + 2).uv;

    // Interpolate: uv = uv0 + bx * (uv1 - uv0) + by * (uv2 - uv0)
    return texCoord[0] +
           barycentrics.x * (texCoord[1] - texCoord[0]) +
           barycentrics.y * (texCoord[2] - texCoord[0]);
}
```

Mip calculation for texture loads:

Mip levels are typically calculated by the hardware in a pixel shader, but because we are intersecting arbitrary geometry with rays, there is no guarantee that neighboring quad pixels hold the information needed to calculate mip levels correctly. One way around this is to calculate the mip level manually by factoring in the distance the ray traveled and the angle between the normal of the intersected triangle and the ray's direction. To do this we reuse the indexing technique discussed earlier to fetch the intersected triangle's normal from the geometry buffer. We then take the dot product of the ray direction and the triangle normal and combine it with the normalized distance the ray traveled to reach the triangle. If that combined value is normalized between 0 and 1, we can multiply it by the mip count of the texture and get a somewhat accurate mip level.

```hlsl
float3 worldSpaceNormal = normalize(GetNormal(rayQuery.CommittedPrimitiveIndex()));

// Select mip levels 0 - 7 based on the incident angle of the ray upon the
// triangle's normal and the distance the ray traveled to hit the triangle
float rayDistanceScale = rayQuery.CommittedRayT() / (ray.TMax - ray.TMin);
float cosAngle = 1.0f - dot(ray.Direction, worldSpaceNormal);

// Uses 8 mip levels for the texture, weighting distance and incident angle equally
// Not sure if this is the best technique but it is a simple way to approximate
uint mipSelection = (cosAngle / 2.0f + rayDistanceScale / 2.0f) * 7.0f;

// Use SampleLevel to sample the calculated mip level
float alphaValue = transparencyTexture1.SampleLevel(textureSampler, texCoord, mipSelection).a;
```