A cloud computing system that receives complex user tasks consisting of several subtasks is studied with respect to its response time. To reduce the service time, each task is split into smaller components that are processed in parallel. The cloud center is modeled as a fork-join queuing system with Pareto-distributed service times on the servers. To analyze the mean response time and its standard deviation, a new approach is used that combines simulation modeling with a machine learning method. The resulting estimates are considerably more accurate than earlier analytical results for fork-join systems.
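The fork-join model described above can be sketched as a minimal discrete-event simulation. This is an illustrative assumption, not the authors' actual code: it assumes Poisson arrivals, one FCFS queue per server, and a task splitting into exactly one subtask per server; the function name `simulate_fork_join` and all parameter values are hypothetical.

```python
import random
import statistics

def simulate_fork_join(n_tasks=100_000, k=4, lam=0.4, alpha=2.5, xm=1.0, seed=1):
    """Fork-join queue: each arriving task forks into k subtasks, one per
    server (FCFS); the task completes when its last subtask finishes.
    Service times are Pareto(alpha, xm); arrivals are Poisson with rate lam."""
    rng = random.Random(seed)
    free_at = [0.0] * k          # time at which each server next becomes idle
    t = 0.0                      # current arrival time
    resp = []
    for _ in range(n_tasks):
        t += rng.expovariate(lam)                     # next Poisson arrival
        done = 0.0
        for s in range(k):
            # Pareto service time via inverse-CDF sampling: xm / U^(1/alpha)
            svc = xm / rng.random() ** (1.0 / alpha)
            start = max(t, free_at[s])                # wait if server is busy
            free_at[s] = start + svc
            done = max(done, free_at[s])              # join: wait for slowest
        resp.append(done - t)                         # task response time
    return statistics.mean(resp), statistics.stdev(resp)

mean_rt, std_rt = simulate_fork_join()
```

With alpha > 2 the Pareto service time has finite variance, so both the sample mean and the sample standard deviation of the response time converge; the paper's approach then trains a machine learning model on such simulated estimates.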