We will now define a probabilistic framework for the representation of predictive motion information.
- (prediction) First, we will define motion-based prediction, which is adapted to smooth trajectories such as those observed in natural environments
- (markov) Seeing this prediction as a prior on motion transitions in the probabilistic domain, we then show how information may be pooled using a Hidden Markov Model, merging the current prediction with the measurement likelihood
- (SMC) In contrast to the neural approximations used previously, we implement this functional model with a particle filtering method. Prediction then takes the form of an anisotropic, context-dependent diffusion of local information.
But first, how do we define prediction?
-
In computer vision, one can define the prediction of the trajectory of some distinguished feature of the image (e.g., a local extremum of luminance) using its motion. Here, we instead take the feature to track to be the motion at a given position, and define motion-based prediction by modelling smoothness in the trajectory of motion:
- knowing the velocity $V_{t-dt}$ at a position $x_{t-dt}$, one expects that this velocity will be transported to a new location, that is $x_{t} \approx x_{t-dt} + V_{t-dt} \, dt$, and approximately conserved in amplitude and direction, $V_t \approx V_{t-dt}$. Nothing new: in the limit of infinitesimal $dt$, this equation is the advection term of the Navier-Stokes equations.
- This "approximately" may be included in our generative model as a random variable: different values for the variance of this state noise quantify the strength, or precision, of the prediction. It shapes association fields in motion space, similarly to what happens in the space domain with association fields in the orientation domain of static images.
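The generative model above can be sketched as a stochastic advection step; the noise amplitudes `sigma_x` and `sigma_v` (names and values chosen here purely for illustration) play the role of the precision of the prediction:

```python
import numpy as np

rng = np.random.default_rng(42)

def predict(x, v, dt=0.01, sigma_x=0.001, sigma_v=0.05):
    """One step of motion-based prediction: advection plus state noise.

    x, v : arrays of shape (N, 2) -- positions and velocities.
    sigma_x, sigma_v : standard deviations of the state noise; small
    values encode a strong (precise) prediction, large values a weak
    one. All parameter values here are illustrative.
    """
    # transport position along the current velocity: x_t ~ x_{t-dt} + V_{t-dt} dt
    x_new = x + v * dt + sigma_x * rng.standard_normal(x.shape)
    # approximately conserve velocity: V_t ~ V_{t-dt}
    v_new = v + sigma_v * rng.standard_normal(v.shape)
    return x_new, v_new
```

With the noise set to zero, the step reduces to pure advection; increasing the variances weakens the prediction.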
-
Note that, contrary to classical models, position becomes an unknown variable instead of being a parameter, as it is in optical flow. Note also that the model is self-referential, since prediction is itself based on motion.
-
Then, we may use a classical Hidden Markov Model to recurrently update our knowledge. This involves three steps, given the probabilistic motion information at the current time; for simplicity, only local probabilistic motion representations are shown:
- prediction of motion thanks to the generative model, used as a prior on motion transitions, with an integral over all possible motions (note that prediction is always diffusive),
- estimation of the likelihood of the predicted state,
- merging both, we update our knowledge from time $t-dt$ to $t$.
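For a local, discretised motion representation, the three steps above amount to one predict, reweight and normalise cycle. A minimal sketch, assuming a finite set of K motion states (the discretisation and shapes are illustrative):

```python
import numpy as np

def hmm_update(p_prev, transition, likelihood):
    """One predict/update cycle of the Hidden Markov Model.

    p_prev     : (K,) posterior over K discrete motion states at t-dt
    transition : (K, K) motion-transition prior,
                 transition[j, i] = p(state_t = j | state_{t-dt} = i)
    likelihood : (K,) measurement likelihood p(obs_t | state_t)
    """
    # 1. prediction: apply the motion-transition prior, integrating
    #    (here, summing) over all possible previous motions
    p_pred = transition @ p_prev
    # 2.-3. estimate the likelihood of the predicted state and merge:
    #    reweight the prediction and renormalise
    p_post = likelihood * p_pred
    return p_post / p_post.sum()
```

Iterating `hmm_update` over time realises the dynamical system described next.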
-
Repeating this operation, we define a dynamical system.
-
This theory looks easy on paper, and there is nothing new in the definition of the dynamical system itself. \textit{BUT}... I have bad news: it is extremely hard to represent \& implement on classical machines. Even if we allow ourselves a rather modest discretisation of the probability space (positions and velocities), the very high dimensionality of the problem makes it tedious to represent (about $2 \times 10^9$ dimensions in the previously shown POF)...
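The particle filtering method announced in the SMC step sidesteps this explosion by representing the joint distribution over position and velocity with a set of weighted samples rather than a full grid. A minimal bootstrap-filter sketch, with illustrative parameter values and a user-supplied log-likelihood function (both assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_step(x, v, w, log_lik, dt=0.01, sigma_x=0.001, sigma_v=0.05):
    """One step of a bootstrap particle filter for the motion model.

    x, v : (N, 2) particle positions and velocities; w : (N,) weights.
    log_lik : callable returning the measurement log-likelihood of each
    particle. Parameter values are illustrative.
    """
    # predict: the anisotropic, context-dependent diffusion is realised by
    # transporting each particle along its own velocity, plus state noise
    x = x + v * dt + sigma_x * rng.standard_normal(x.shape)
    v = v + sigma_v * rng.standard_normal(v.shape)
    # update: reweight by the measurement likelihood, then normalise
    w = w * np.exp(log_lik(x, v))
    w = w / w.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(w**2) < 0.5 * len(w):
        idx = rng.choice(len(w), size=len(w), p=w)
        x, v, w = x[idx], v[idx], np.full(len(w), 1.0 / len(w))
    return x, v, w
```

The cost now scales with the number of particles N rather than with the size of the discretised probability space.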