Currently, more effort is devoted to improving the capabilities of Large Language Models (LLMs) than to addressing their implications. Modern language models can generate text that appears indistinguishable from that written by human experts. While such breakthroughs improve quality of life, they simultaneously pose new challenges in education, science, and social media. Moreover, existing approaches to detecting AI-generated text either incur high computational cost or require access to the internal computations of LLMs, which hinders their public availability. Motivated by these considerations, this paper presents a new paradigm for detecting AI-generated text based on collecting preliminary token statistics and computing n-gram perplexity features. On the combination of the HC3, M4GT, and MAGE datasets, it achieves a 2x speedup over existing approaches with a quality drop of around 5%, and a combination of methods achieves the best quality. This strikes a balance between computational cost, accessibility, and performance.
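To make the core idea concrete, the following is a minimal sketch of n-gram perplexity features computed from precomputed token statistics, without any access to LLM internals. The function names, the add-one smoothing, and the toy corpus are illustrative assumptions, not the paper's exact implementation:

```python
import math
from collections import Counter

def ngram_counts(corpus_tokens, n):
    """Precompute n-gram and (n-1)-gram prefix counts over a reference corpus."""
    grams, prefixes = Counter(), Counter()
    for tokens in corpus_tokens:
        padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
        for i in range(len(padded) - n + 1):
            gram = tuple(padded[i:i + n])
            grams[gram] += 1
            prefixes[gram[:-1]] += 1
    return grams, prefixes

def ngram_perplexity(tokens, grams, prefixes, n, vocab_size):
    """Perplexity of a text under an add-one-smoothed n-gram model."""
    padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
    log_prob, count = 0.0, 0
    for i in range(len(padded) - n + 1):
        gram = tuple(padded[i:i + n])
        # Add-one (Laplace) smoothing so unseen n-grams get nonzero probability.
        p = (grams[gram] + 1) / (prefixes[gram[:-1]] + vocab_size)
        log_prob += math.log(p)
        count += 1
    return math.exp(-log_prob / count)

# Toy reference corpus of tokenized texts (hypothetical data).
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
grams, prefixes = ngram_counts(corpus, n=2)
vocab = {t for doc in corpus for t in doc} | {"<s>", "</s>"}
ppl = ngram_perplexity(["the", "cat", "sat"], grams, prefixes, 2, len(vocab))
```

In a detection setting, perplexities like this over several values of n would serve as features for a lightweight classifier, which is what keeps the computational cost low relative to running a full LLM.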