Recent breakthroughs in text generation mean that output quality improves with each
new model. At the same time, misuse of generated text has become a pressing concern: spreading false
information, spamming, and mass-producing scientific articles and other texts are all problems that have arisen from this surge.
Binary text classification methods have been proposed to address this problem. This work presents an approach
based on aggregating QLoRA adapters, each trained on the distribution of a different family of generative models.
Our method, LAVA (LLM Adapters for Various dAtasets), achieves results comparable to the primary
baseline provided by the PAN organizers. Because the adapters can be trained in parallel for the language
models, the proposed method yields a fast, efficient detector with strong performance on the target metrics.
Detection remains straightforward and flexible: a new adapter can be tailored to an emerging distribution
and added to the existing ensemble. Each adapter learns its dependencies independently of the others, after which
their outputs are aggregated.
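
The abstract does not specify the serving stack or the aggregation rule, so the following is only a minimal sketch of the ensemble idea. It assumes a Hugging Face transformers sequence classifier with per-family adapters attached via peft under separate names, hypothetical adapter paths, and a simple mean over per-adapter probabilities; none of these details are confirmed by the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

# Hypothetical identifiers; the paper does not publish its base model or adapter paths.
# A true QLoRA setup would also load the base model in 4-bit (bitsandbytes), omitted here.
BASE = "some-base-model"
ADAPTERS = {
    "gpt_family": "path/to/gpt_family_adapter",
    "llama_family": "path/to/llama_family_adapter",
}

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=2)

# Attach each per-family adapter under its own name so they can be switched at inference.
items = list(ADAPTERS.items())
model = PeftModel.from_pretrained(model, items[0][1], adapter_name=items[0][0])
for name, path in items[1:]:
    model.load_adapter(path, adapter_name=name)
model.eval()

def detect(text: str, threshold: float = 0.5) -> bool:
    """Return True if the aggregated score labels `text` as machine-generated."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    probs = []
    with torch.no_grad():
        for name in ADAPTERS:
            model.set_adapter(name)  # each adapter scores the text independently
            logits = model(**inputs).logits
            probs.append(torch.softmax(logits, dim=-1)[0, 1].item())
    # Aggregate per-adapter probabilities; the mean is an illustrative choice,
    # as the abstract only states that the outputs are aggregated.
    return sum(probs) / len(probs) >= threshold
```

Because each adapter is independent, adding a detector for a newly observed model family amounts to training one more adapter and registering it in the ensemble, without retraining the others.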