The amount of textual information is steadily expanding and reaching wider audiences, leading to a rise in spelling and typographical errors. This further accentuates the Automatic Spelling Correction problem, which remains one of the primary tasks of Natural Language Processing. At the moment this problem is not widely studied for the Russian language, and previously proposed models often impose a strict limit on the number of errors per word. This paper presents a model for Automatic Spelling Correction in the Russian language that can handle multiple-error cases without any limit on the number of errors processed. The model is based on a probabilistic approach and consists of multiple stages, including classification of word correctness, preliminary candidate search with a shingle-based approach, a source model, and an error model that applies bigrams and phonetics. We outline the process of obtaining data from open sources and investigate different methods of constructing and utilising dictionaries. By searching for candidates with a shingle-based approach that places no limit on the number of errors, the model is robust to words containing multiple errors. The shingle-based search is compared with a candidate generation approach that uses a fixed edit-distance cutoff. We evaluate on several test samples and obtain a top-5 F1-score of 0.80 on real data, which is mostly social media, and 0.91 on a hand-crafted sample with multiple errors.
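
To give a sense of the shingle-based candidate search mentioned above, the following is a minimal Python sketch, not the authors' implementation: dictionary words are indexed by character n-grams ("shingles"), and candidates for a misspelled word are retrieved and ranked by shingle overlap rather than pruned by a fixed edit-distance cutoff. The names (ShingleIndex, candidates) and the Jaccard ranking are illustrative assumptions.

from collections import defaultdict

def shingles(word: str, n: int = 3) -> set:
    """Return the set of character n-grams of a boundary-padded word."""
    padded = f"^{word}$"  # boundary markers help match prefixes and suffixes
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

class ShingleIndex:
    def __init__(self, dictionary, n: int = 3):
        self.n = n
        self.word_shingles = {w: shingles(w, n) for w in dictionary}
        self.inverted = defaultdict(set)  # shingle -> words containing it
        for w, grams in self.word_shingles.items():
            for g in grams:
                self.inverted[g].add(w)

    def candidates(self, word: str, top_k: int = 5):
        """Rank dictionary words by Jaccard similarity of shingle sets."""
        query = shingles(word, self.n)
        pool = set()
        for g in query:
            pool |= self.inverted.get(g, set())
        scored = []
        for w in pool:
            grams = self.word_shingles[w]
            jaccard = len(query & grams) / len(query | grams)
            scored.append((jaccard, w))
        return [w for _, w in sorted(scored, reverse=True)[:top_k]]

# Hypothetical usage: candidates sharing even a few shingles are kept,
# whereas a fixed edit-distance cutoff would discard heavily misspelled words.
index = ShingleIndex(["молоко", "молоток", "мороженое"])
print(index.candidates("малако"))  # illustrative multi-error misspelling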