We refine a method for describing and evaluating a previously proposed process by which a system (robot) studies an abstract environment. In the process, we do not model any
biological cognition mechanisms and consider the system as an agent (or a group of agents)
equipped with an information processor. The robot (agent) makes a move in the environment, consumes the information supplied by the environment, and outputs the next move; thus, the process can be viewed as a game. The robot moves through an unknown environment and should detect new
objects located in it and recognize them. To do so, the system should build comprehensive images of the visible objects and memorize them when necessary (it should also choose the current goal set). The main problems here are object recognition and the assessment of the information
reward in the game. Thus, the main novelty of the paper is a new method for evaluating the amount of visual information about an object, which serves as the reward. In such a system, we suggest using
a minimally pre-trained neural network to be responsible for recognition: initially, we train the network only on Biederman geons (geometric primitives). Training sets of geons are generated programmatically, and we demonstrate that a network trained in this way recognizes geons in real objects quite well. Sets of geons associated with objects (schemes) are used as the rewards. In the future, we also expect to procedurally generate new objects from geon schemes obtained from the environment and to store them in a database.
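The move-observe-reward loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual implementation: the class names, the hard-coded geon labels, and the rule "reward = number of previously unseen geons" are all assumptions standing in for the pre-trained recognition network and the proposed information-reward measure.

```python
# Hypothetical sketch of the robot-environment "game" loop.
# All names and the reward rule below are illustrative assumptions.

class Environment:
    """Toy environment: each position exposes a set of geon labels,
    standing in for what the pre-trained network would recognize."""
    def __init__(self):
        self.scenes = {
            0: {"cylinder", "cone"},
            1: {"cone", "wedge"},
            2: {"brick"},
        }

    def observe(self, position):
        # Wrap around so every move yields some observation.
        return self.scenes[position % len(self.scenes)]


class Agent:
    """Agent that memorizes geon schemes and scores each observation
    by the number of previously unseen geons (the information reward)."""
    def __init__(self):
        self.memory = set()

    def step(self, observation):
        new_geons = observation - self.memory
        self.memory |= observation          # memorize the scheme
        return len(new_geons)               # reward: amount of new information


env = Environment()
agent = Agent()
rewards = [agent.step(env.observe(move)) for move in range(4)]
print(rewards)                 # reward drops as scenes become familiar
print(sorted(agent.memory))    # accumulated geon memory
```

The point of the sketch is the shape of the game: the reward naturally decays to zero once the environment holds no new visual information, which is what would drive such an agent toward unexplored objects.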