In our implementation research, we apply a workflow approach to the modeling and development of a Big Data processing pipeline using open-source technologies. The data processing workflow is a set of interrelated steps, each of which launches a particular job such as a Spark job, a shell job, or a PostgreSQL command. All workflow steps are chained to form an integrated process that imitates the data load from the staging storage area to the data mart storage area. We performed an experimental workflow-based implementation of a data processing pipeline that stages data through the different storage areas and uses an actual industrial KPI dataset of some 30 million records. Evaluation of the implementation results demonstrates the applicability of the proposed workflow to other application domains and datasets, provided that they satisfy the data format expected at the input stage of the workflow. A minimal sketch of such a chained workflow is given below.
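The excerpt does not name the workflow orchestrator; the sketch below assumes Apache Airflow purely for illustration. It chains the three kinds of steps mentioned above (a shell job, a Spark job, and a PostgreSQL command) into one staging-to-datamart process. All task names, file paths, the Spark application, and the connection IDs are hypothetical.

```python
# Hedged sketch of the staged workflow, assuming Apache Airflow as the
# orchestrator; names, paths, and connections are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="kpi_staging_to_datamart",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule=None,  # triggered manually in this sketch
    catchup=False,
) as dag:
    # Shell step: land raw KPI files into the staging storage area.
    ingest = BashOperator(
        task_id="ingest_to_staging",
        bash_command="hdfs dfs -put -f /data/incoming/kpi/*.csv /staging/kpi/",
    )

    # Spark step: transform the staged records toward the datamart layout.
    transform = SparkSubmitOperator(
        task_id="transform_kpi",
        application="/jobs/kpi_transform.py",  # hypothetical Spark job
        conn_id="spark_default",
    )

    # PostgreSQL step: refresh the datamart table from the transformed output.
    load_datamart = PostgresOperator(
        task_id="load_datamart",
        postgres_conn_id="datamart_db",  # hypothetical connection ID
        sql="REFRESH MATERIALIZED VIEW kpi_datamart;",
    )

    # Chain the steps into one integrated staging-to-datamart process.
    ingest >> transform >> load_datamart
```

Each operator here stands in for one workflow step; the `>>` chaining expresses the same step ordering that the described workflow uses to move data from the staging area to the data mart area.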