A deep neural network method is proposed for recognizing critical situations in transport systems from video frames captured by intelligent-vehicle cameras; the method is effective in terms of both accuracy and processing speed. Unlike known solutions for detecting and recognizing objects and normal or critical situations, it uses classification that is subsequently reinforced over several video-stream frames, together with an automatic annotation algorithm.
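The abstract does not detail how the per-frame classification is reinforced over several frames; the sketch below shows one plausible realization under assumptions of our own: per-frame class probabilities from a hypothetical single-frame classifier are averaged over a short sliding window, and a decision is accepted only when the averaged confidence clears a threshold. The window length and threshold are illustrative values, not parameters reported in the paper.

```python
# Sketch only: reinforcing a per-frame classification over several
# consecutive video-stream frames.  The window length and decision
# threshold are illustrative assumptions.
from collections import deque
import numpy as np

class MultiFrameReinforcer:
    def __init__(self, window=5, threshold=0.7):
        self.window = deque(maxlen=window)  # last `window` per-frame probability vectors
        self.threshold = threshold

    def update(self, frame_probs):
        """Add one frame's class probabilities and return the reinforced decision."""
        self.window.append(np.asarray(frame_probs, dtype=np.float64))
        mean_probs = np.mean(np.stack(list(self.window)), axis=0)  # average over stored frames
        label = int(np.argmax(mean_probs))
        confident = mean_probs[label] >= self.threshold
        return label, mean_probs[label], confident

# Usage with dummy per-frame outputs (class 1 standing for "critical situation"):
reinforcer = MultiFrameReinforcer()
for probs in [[0.3, 0.7], [0.2, 0.8], [0.25, 0.75], [0.3, 0.7], [0.2, 0.8]]:
    label, conf, ok = reinforcer.update(probs)
print(label, round(conf, 2), ok)  # -> 1 0.75 True
```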
Adapted neural network architectures are proposed: a dual network that identifies drivers and passengers from face images, and a network with independent recurrent layers that classifies situations from a video fragment.
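The concrete layer configurations of these architectures are given in the body of the paper, not in the abstract. The sketch below only illustrates, under assumptions, the two ideas named above: a dual (two-branch, weight-sharing) embedding network compared by distance for face identification, and an independently recurrent layer (element-wise recurrent weights) feeding a classifier over per-frame features. All layer sizes, names, and hyperparameters are illustrative, not the authors' settings.

```python
# Illustrative sketch only: layer sizes and names are assumptions,
# not the configurations reported in the paper.
import torch
import torch.nn as nn

class FaceEmbedder(nn.Module):
    """Shared branch of a dual (weight-sharing) network: maps a face image to an embedding."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.features(x), dim=1)

class IndRNNLayer(nn.Module):
    """Independently recurrent layer: each hidden unit keeps its own scalar recurrent weight."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.inp = nn.Linear(in_dim, hidden_dim)
        self.rec = nn.Parameter(torch.full((hidden_dim,), 0.5))  # element-wise recurrence

    def forward(self, x):                         # x: (batch, time, in_dim)
        h = x.new_zeros(x.size(0), self.rec.numel())
        outs = []
        for t in range(x.size(1)):
            h = torch.relu(self.inp(x[:, t]) + self.rec * h)
            outs.append(h)
        return torch.stack(outs, dim=1)           # (batch, time, hidden_dim)

class SituationClassifier(nn.Module):
    """Classifies a video fragment from per-frame feature vectors via the recurrent layer."""
    def __init__(self, in_dim=256, hidden_dim=128, n_classes=2):
        super().__init__()
        self.rnn = IndRNNLayer(in_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, frame_features):            # (batch, time, in_dim)
        return self.head(self.rnn(frame_features)[:, -1])  # logits from the last time step

# Dual-network face comparison: identity decided by embedding distance.
embedder = FaceEmbedder()
dist = torch.norm(embedder(torch.randn(1, 3, 64, 64)) - embedder(torch.randn(1, 3, 64, 64)), dim=1)

# Situation classification over a 16-frame fragment of 256-dimensional frame features.
logits = SituationClassifier()(torch.randn(1, 16, 256))
```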
A scheme of an intelligent distributed city-wide transport-safety system is proposed, in which the cameras and on-board computers are united in a single network. Software modules are developed in Python and field experiments are carried out. The applicability of the proposed algorithms and programs to unmanned ground vehicles (UGV) and to driver-assistance systems is demonstrated with illustrative real-time examples.