This paper addresses the problem of object identification from multiple 3D partial views, collected from different viewing angles, with the objective of disambiguating between similar objects. We assume a mobile robot equipped with a depth sensor that autonomously observes an object from different positions, following no previously known pattern. The challenge is to efficiently combine the set of observations into a single classification. We approach the problem with a sequential importance resampling filter, which allows us to combine the sequence of observations and, by its sampling nature, to handle the large number of possible partial views. In this context, we introduce innovations both in the representation of partial views and in the formulation of the classification problem. We provide a qualitative comparison to support our representation and illustrate the identification process with a case study.
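The sketch below illustrates, in broad strokes, how a sequential importance resampling (SIR) filter can fuse a sequence of partial-view observations into a class posterior, as described above. It is a minimal illustration under assumed interfaces, not the paper's implementation: the particle structure (class, viewpoint), the `likelihood` observation model, and the particle count are hypothetical placeholders supplied by the caller.

```python
# Minimal SIR sketch for multi-view object identification (illustrative only).
# Assumptions: a user-supplied likelihood(observation, class_id, viewpoint)
# observation model; particles are (class, viewpoint) hypotheses.
import numpy as np

rng = np.random.default_rng(0)

def sir_classify(observations, classes, likelihood, n_particles=500):
    """Combine a sequence of partial-view observations into a class posterior."""
    # Each particle is a hypothesis: (object class index, viewing angle).
    particles = [(rng.integers(len(classes)), rng.uniform(0.0, 2 * np.pi))
                 for _ in range(n_particles)]
    weights = np.full(n_particles, 1.0 / n_particles)

    for obs in observations:
        # Importance weighting: score every hypothesis against the new view.
        weights *= np.array([likelihood(obs, c, v) for c, v in particles])
        weights += 1e-300                     # guard against all-zero weights
        weights /= weights.sum()

        # Resampling: duplicate likely hypotheses, drop unlikely ones.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = [particles[i] for i in idx]
        weights = np.full(n_particles, 1.0 / n_particles)

    # Class posterior = fraction of surviving particles per class.
    posterior = np.zeros(len(classes))
    for c, _ in particles:
        posterior[c] += 1.0 / n_particles
    return dict(zip(classes, posterior))
```

The sampling step is what keeps the method tractable: rather than enumerating every possible partial view, only the hypotheses that remain consistent with the accumulated observations are carried forward.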