Environment mapping is crucial for proper decision making in multi-robot systems. We propose a multi-aspect mapping technique that allows a group of mobile robots to perceive surrounding objects both at the level of fine-grained geometry and at the level of their semantic representation. The inputs to the presented mapping subsystem are RGB-D streams from multiple robots combined with their navigation information. The primary output is a set of discovered objects, each associated with a corresponding visual and semantic description. The results of the conducted experiments confirm the suitability of the proposed approach for the exploration of indoor environments.