We address the problem of learning over multiple interdependent temporal sequences whose dependencies are modeled by a graph. We propose a model that simultaneously fills in missing values and predicts future ones. The approach is based on representation learning techniques: temporal data are embedded in a latent vector space so as to capture both the dynamics of the process and the relations between the different sources. Information completion (of missing values) and prediction are then performed on this latent representation. In particular, the model handles both tasks within a single formalism, whereas they are most often addressed separately with different methods. Moreover, the model can deal with heterogeneous information (labels and real values) at the same time. We have tested the model on a concrete application, car-traffic forecasting, where each time series characterizes a particular road and the graph structure corresponds to the road map of the city. We compare our method against several baselines for both completion and prediction on two large datasets and show the ability of our technique to solve these problems jointly. Note that the model is general and can be used in many other application fields.
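To make the latent-space formulation concrete, the following is a minimal sketch of the kind of objective such a model optimizes; the notation is ours, not taken from the abstract. We assume each source $i$ at time $t$ has a latent vector $z_i^t$, a decoder $d$ maps latents back to observations $x_i^t$, a dynamic function $h$ advances latents in time, a graph term ties neighboring sources $N(i)$ together, $\mathcal{O}$ is the set of observed $(i,t)$ pairs, $\Delta$ is a reconstruction loss, and $\lambda, \mu$ are trade-off weights:

$$
\min_{Z,\, d,\, h} \;
\sum_{(i,t) \in \mathcal{O}} \Delta\big(d(z_i^t),\, x_i^t\big)
\;+\; \lambda \sum_{i,t} \big\| z_i^{t+1} - h(z_i^t) \big\|^2
\;+\; \mu \sum_{i,t} \sum_{j \in N(i)} \big\| z_i^t - z_j^t \big\|^2 .
$$

Under this reading, completion corresponds to decoding $d(z_i^t)$ at unobserved pairs $(i,t) \notin \mathcal{O}$, and prediction to unrolling $h$ beyond the last observed time step; both tasks thus share the single latent representation $Z$, which is what lets the model address them within one formalism.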