I'm trying to write a snow shader with displacement, using a secondary camera that renders depth into a RenderTexture. The problem is that the texture is cleared every frame, so the snow tracks don't persist, and the Unity docs say there is no simple way around it. I'm using URP version 12.1.8.
As far as I understand, to modify the clear logic I would need to write my own SRP, which I'd really rather avoid, especially since in the Built-in Render Pipeline this problem was solved with a single checkbox on the camera.
I've tried writing a compute shader that accumulates the data from the camera's RT into a separate RT, but it feels wrong, so I'd like to know: is there a simpler way?
Also, here is the exact code from UniversalRenderPipeline.cs that prevents a Base camera from skipping the clear by forcing clearDepth to true. Could I copy all the code from the original URP and change just that line?
if (isSceneViewCamera)
{
    cameraData.renderType = CameraRenderType.Base;
    cameraData.clearDepth = true;
    cameraData.postProcessEnabled = CoreUtils.ArePostProcessesEnabled(camera);
    cameraData.requiresDepthTexture = settings.supportsCameraDepthTexture;
    cameraData.requiresOpaqueTexture = settings.supportsCameraOpaqueTexture;
    cameraData.renderer = asset.scriptableRenderer;
}
else if (additionalCameraData != null)
{
    cameraData.renderType = additionalCameraData.renderType;
    cameraData.clearDepth = (additionalCameraData.renderType != CameraRenderType.Base) ? additionalCameraData.clearDepth : true;
    cameraData.postProcessEnabled = additionalCameraData.renderPostProcessing;
    cameraData.maxShadowDistance = (additionalCameraData.renderShadows) ? cameraData.maxShadowDistance : 0.0f;
    cameraData.requiresDepthTexture = additionalCameraData.requiresDepthTexture;
    cameraData.requiresOpaqueTexture = additionalCameraData.requiresColorTexture;
    cameraData.renderer = additionalCameraData.scriptableRenderer;
}
I ended up using a compute shader to accumulate the depth from the current frame into a persistent RenderTexture.
The input texture is the camera's target texture (make sure it is depth-only).
The result must be an RGB texture.
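For anyone who wants to try the same approach, here is a minimal sketch of the dispatch side. All names (SnowDepthAccumulator, CSMain, _CameraDepth, _Accumulated, the 8x8 thread group size) are my own assumptions, not from the original setup, and the kernel shown in the comment is just one possible way to combine frames (keeping the maximum depth seen so far):

```csharp
// Assumed compute kernel (AccumulateDepth.compute), shown here for reference:
//
//   #pragma kernel CSMain
//   Texture2D<float> _CameraDepth;       // depth-only RT from the secondary camera
//   RWTexture2D<float4> _Accumulated;    // persistent track texture
//   [numthreads(8,8,1)]
//   void CSMain(uint3 id : SV_DispatchThreadID)
//   {
//       float d = _CameraDepth[id.xy];
//       // keep the deepest imprint seen so far instead of overwriting it
//       _Accumulated[id.xy] = max(_Accumulated[id.xy], float4(d, d, d, 1));
//   }

using UnityEngine;

public class SnowDepthAccumulator : MonoBehaviour
{
    public ComputeShader accumulateShader;  // the kernel above
    public RenderTexture cameraDepthRT;     // the secondary camera's target texture
    public RenderTexture accumulatedRT;     // persistent RT the snow shader samples

    int kernel;

    void Start()
    {
        kernel = accumulateShader.FindKernel("CSMain");
        // a compute shader can only write to an RT with random write enabled
        accumulatedRT.enableRandomWrite = true;
        accumulatedRT.Create();
    }

    void LateUpdate()
    {
        accumulateShader.SetTexture(kernel, "_CameraDepth", cameraDepthRT);
        accumulateShader.SetTexture(kernel, "_Accumulated", accumulatedRT);
        accumulateShader.Dispatch(kernel,
            Mathf.CeilToInt(accumulatedRT.width / 8f),
            Mathf.CeilToInt(accumulatedRT.height / 8f),
            1);
    }
}
```

Dispatching from LateUpdate is just one option; running it from a render callback after the secondary camera finishes would also work, and the max() combine could be replaced with whatever accumulation/fade logic the snow effect needs.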