Our daily endeavors occur in a complex visual environment whose intrinsic variability shapes the way we integrate information to make decisions. By processing thousands of parallel sensory inputs, the brain is in principle able to compute the uncertainty of its environment, allowing it to combine its internal representations with new sensory inputs through Bayesian integration to drive optimal inference. While there is convincing evidence that humans do compute sensory uncertainty to guide their behavior, the neurobiological and computational principles on which these uncertainty computations rely remain poorly understood. Here, we generated naturalistic stimuli of controlled uncertainty and performed a model-based analysis of their electrophysiological correlates in the primary visual cortex. First, we report two layer-specific neuronal responses: infragranular neurons were vulnerable to increases in uncertainty, whereas supragranular neurons were resilient, sometimes even reducing the uncertainty of the input. Second, we used neural decoding to show that these two response types serve distinct functional roles at the population level: vulnerable neurons encode only the sensory feature of the input (here, orientation), while resilient neurons co-encode both the sensory feature and its uncertainty. Finally, we implemented a recurrent leaky integrate-and-fire neural network to demonstrate mechanistically that these different responses to uncertainty can be explained by different patterns of recurrent connectivity between cortical neurons. Overall, we provide neurobiological and computational evidence that pinpoints recurrent interactions as the neural substrate of computations on sensory uncertainty. This fits theoretical accounts of the canonical cortical microcircuit, potentially establishing uncertainty computation as a new general role for local recurrent cortical connectivity.
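The modeling approach mentioned above can be illustrated with a minimal sketch: a recurrent leaky integrate-and-fire (LIF) network in Python/NumPy, driven by an orientation-tuned feedforward input whose tuning width stands in for sensory uncertainty. All parameter values, the random weight statistics, and the tuning form below are illustrative assumptions for exposition, not the model actually used in the study.

```python
import numpy as np

# Illustrative recurrent LIF network (assumed parameters, not the paper's model).
rng = np.random.default_rng(0)

N = 100          # number of neurons
dt = 1e-3        # Euler integration step (s)
T = 0.5          # simulated duration (s)
tau = 20e-3      # membrane time constant (s)
v_th = 1.0       # spike threshold (dimensionless voltage units)
v_reset = 0.0    # reset potential after a spike

# Recurrent connectivity: weak random coupling with no self-connections.
# The structure of this matrix is the hypothesized substrate of the
# different population responses to uncertainty.
W = rng.normal(0.0, 0.05, size=(N, N)) / np.sqrt(N)
np.fill_diagonal(W, 0.0)

# Orientation-tuned feedforward drive: each neuron prefers one orientation;
# lowering kappa broadens the input, mimicking higher sensory uncertainty.
prefs = np.linspace(0.0, np.pi, N, endpoint=False)
stim_ori = np.pi / 2
kappa = 4.0
drive = 1.5 * np.exp(kappa * (np.cos(2 * (prefs - stim_ori)) - 1.0))

v = np.zeros(N)
spike_counts = np.zeros(N)

for _ in range(int(T / dt)):
    spikes = v >= v_th                       # detect threshold crossings
    spike_counts += spikes
    v[spikes] = v_reset                      # reset spiking neurons
    # Leaky integration of feedforward drive plus instantaneous
    # recurrent voltage jumps from presynaptic spikes.
    v += dt / tau * (-v + drive) + W @ spikes.astype(float)

# Neurons tuned near the stimulus orientation fire most.
```

Sharpening or broadening `kappa`, or restructuring `W` (e.g., distance-dependent instead of random coupling), lets one probe how recurrent connectivity reshapes the population response to input uncertainty.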