The integration of local information is essential for measuring the true 2D motion of a surface from both ambiguous local 1D motion signals produced by elongated edges and non-ambiguous local 2D motion signals from features such as corners, end-points, or texture elements. The dynamics of this motion integration show a complex time course that can be read out from tracking eye movements: local 1D motion signals are extracted first and pooled to initiate the ocular responses, before 2D motion signals are taken into account to refine the tracking direction until it matches the surface motion direction. The nature of these 1D and 2D motion computations is still unclear. Previously, we showed that the late, 2D-driven response components to either plaids or barber-poles have very similar latencies over a large range of contrasts, suggesting a shared mechanism. However, the two types of stimuli produce different contrast response functions, suggesting different motion processing. We designed a two-pathway Bayesian model of motion integration and showed that this family of contrast response functions can be predicted from the probability distributions of 1D and 2D motion signals for each type of stimulus. Indeed, this formulation may explain contrast response functions that could not be accounted for by a simpler Bayesian model (Weiss et al., 2002, Nature Neuroscience, 5, 598–604) and provides a quantitative framework for studying how local signals with different relative ambiguities may be pooled into an integrated response of the system. Finally, we formulate how spatially distributed information may be pooled and draw an analogy between this method and approaches based on partial differential equations. This simple model correctly accounts for some nonlinear interactions between neighboring direction-selective neurons that are observed in short-latency ocular following responses and in neurophysiological data.
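
To make the Bayesian combination of 1D and 2D motion cues concrete, the following is a minimal sketch, assuming Gaussian likelihoods in velocity space and a zero-mean low-speed prior in the spirit of Weiss et al. (2002); the function names, the contrast-to-variance mapping, and all parameter values are illustrative assumptions, not the model described above. A 1D edge measurement has a likelihood elongated along the constraint line (large variance parallel to the edge, reproducing the aperture problem), whereas a 2D feature yields an isotropic likelihood; contrast scales the precision of both.

    import numpy as np

    def gaussian_precision(edge_normal, sigma_perp, sigma_par):
        # Precision (inverse covariance) of a velocity likelihood with
        # uncertainty sigma_perp along the edge normal and sigma_par
        # along the edge; sigma_par >> sigma_perp mimics the aperture problem.
        n = edge_normal / np.linalg.norm(edge_normal)
        t = np.array([-n[1], n[0]])
        cov = sigma_perp**2 * np.outer(n, n) + sigma_par**2 * np.outer(t, t)
        return np.linalg.inv(cov)

    def map_velocity(likelihoods, sigma_prior=1.0):
        # Posterior mean under Gaussian likelihoods and a zero-mean
        # low-speed prior (in the Gaussian case, MAP = posterior mean).
        P = np.eye(2) / sigma_prior**2          # prior precision
        Pv = np.zeros(2)
        for v_mean, precision in likelihoods:
            P += precision
            Pv += precision @ v_mean
        return np.linalg.solve(P, Pv)

    c = 0.3                                     # contrast (assumed to scale precision)
    v_true = np.array([1.0, 0.0])               # true surface motion
    edge_normal = np.array([1.0, 1.0]) / np.sqrt(2)   # oblique edge
    v_normal = (v_true @ edge_normal) * edge_normal   # component the edge "sees"

    lik_1d = (v_normal, gaussian_precision(edge_normal, 0.1 / c, 10.0))
    lik_2d = (v_true, (c / 0.2)**2 * np.eye(2))       # isotropic feature cue

    print(map_velocity([lik_1d]))               # biased toward the edge-normal direction
    print(map_velocity([lik_1d, lik_2d]))       # pulled toward the true surface motion

Because contrast enters through the likelihood widths, sweeping c in such a sketch traces out a contrast response function whose shape depends on the relative ambiguity of the 1D and 2D cues for a given stimulus.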
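
The analogy with partial differential equations can likewise be illustrated with a toy diffusion scheme: confidence-weighted local velocity estimates are relaxed toward their neighbors, which amounts to gradient descent on a data-plus-smoothness energy, reminiscent of classical PDE-based flow regularization (e.g., Horn & Schunck, 1981). The energy, update rule, and parameters below are our assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def pool_velocities(v_obs, weight, lam=1.0, dt=0.1, n_iter=500):
        # Gradient descent on E(v) = sum_x w(x)|v - v_obs|^2 + lam*|grad v|^2,
        # i.e. a heat-equation-like diffusion update with a data-attachment term.
        # v_obs: (H, W, 2) local velocity estimates; weight: (H, W) confidences.
        v = v_obs.copy()
        for _ in range(n_iter):
            # 5-point Laplacian, periodic borders for simplicity
            lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                   + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
            v += dt * (lam * lap - weight[..., None] * (v - v_obs))
        return v

    H, W = 16, 16
    normal = np.array([1.0, 1.0]) / np.sqrt(2)
    v_true = np.array([1.0, 0.0])
    v_obs = np.tile((v_true @ normal) * normal, (H, W, 1))  # ambiguous 1D cues everywhere
    weight = np.full((H, W), 0.1)                           # low confidence for 1D cues
    v_obs[0, 0] = v_true                                    # one reliable 2D feature (a corner)
    weight[0, 0] = 5.0

    v = pool_velocities(v_obs, weight)
    print(v.mean(axis=(0, 1)))   # pooled field drifts from the normal estimate toward (1, 0)

In this reading, the non-local propagation of the reliable 2D signal through the velocity field plays the role of the nonlinear interactions between neighboring direction-selective units mentioned above.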