One of the critical problems in computer modeling is obtaining numerical results from computationally heavy models over large sets of input parameters. In particular, this problem arises when researchers need accurate results for data visualization or machine learning. For instance, executing a simulation model on thousands of input parameter combinations can take days or weeks. Other examples include estimating the properties of queuing networks, solving optimization problems, and running various physics and economics models. This article describes the architecture of a distributed system that uses Docker for problem execution. The system includes a backend server, SQL and Redis databases, a supervisor service, and workers on which the problems are computed. The supervisor checks the Redis queue for new tasks and distributes them among the workers. Every input problem is automatically split into smaller chunks, called tasks, which the workers compute. The user interacts with the system through a web interface written in JavaScript. To execute a problem, the user provides a JSON file with the problem description and a Docker image in which the tasks will be run. The system can be deployed in any public cloud. We implemented several strategies for prioritizing tasks and distributing them among the workers. Numerical results presented in the paper demonstrate how the choice of task distribution and prioritization method affects the duration of the computation under moderate workload.
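To make the supervisor-worker flow concrete, the following is a minimal sketch of a supervisor loop that polls a Redis queue for tasks and runs each task in its Docker image. It assumes the redis-py and docker Python packages; the queue name "tasks" and the JSON fields "image" and "command" are hypothetical and do not reflect the system's actual schema.

    import json

    import docker
    import redis

    # Connect to the Redis queue and the local Docker daemon.
    r = redis.Redis(host="localhost", port=6379)
    docker_client = docker.from_env()

    def supervisor_loop():
        """Poll the (hypothetical) "tasks" queue and run each task in Docker."""
        while True:
            # Block until a new task description appears on the queue.
            _, raw = r.blpop("tasks")
            task = json.loads(raw)
            # Run the task inside the user-supplied Docker image; in the
            # described system this step would be dispatched to one of
            # several workers according to the chosen distribution strategy.
            docker_client.containers.run(
                image=task["image"],
                command=task["command"],
                detach=True,
            )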