Recent research has raised serious concerns about the reliability of passwords generated by large language models, highlighting structural weaknesses that make them far less secure than they appear. Studies conducted independently by AI security firm Irregular and cybersecurity company Kaspersky found that leading AI models consistently produce passwords that follow predictable patterns rather than true randomness. While these passwords often meet conventional complexity requirements such as length, mixed character classes, and symbols, their underlying predictability makes them significantly easier for attackers to guess, even when standard security tools rate them highly.
Irregular’s findings demonstrate how repetition and pattern bias undermine the effectiveness of AI-generated credentials. In controlled testing, the firm prompted a leading model multiple times to create passwords and observed a limited pool of outputs with notable repetition. One specific password appeared in more than a third of the attempts, a frequency that would be statistically negligible if the generation process were truly random. This behavior indicates that the model is not producing fresh, independent sequences but instead drawing from learned patterns embedded during training. These patterns reflect common human tendencies in password creation, such as starting with uppercase letters, clustering numbers in certain positions, and ending with familiar symbols.
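The kind of test Irregular describes can be approximated in a few lines: collect repeated generations and measure how often the single most common output recurs. The sample strings below are illustrative stand-ins, not actual model output.

```python
from collections import Counter

def repetition_rate(passwords):
    """Fraction of generations taken up by the single most common output.

    For truly independent random generation this should be close to
    1/len(passwords); Irregular reported one password recurring in
    over a third of attempts.
    """
    counts = Counter(passwords)
    _, hits = counts.most_common(1)[0]
    return hits / len(passwords)

# Hypothetical outputs standing in for repeated model prompts:
samples = ["Xk9#mP2$vL5n", "Tr0ub4dor&3!", "Xk9#mP2$vL5n",
           "Qw7!zR4@bN8c", "Xk9#mP2$vL5n", "Hj3$kL9#pQ2w"]
print(repetition_rate(samples))  # 3 of 6 are the same string -> 0.5
```

A uniform random generator over even a modest alphabet would make any exact repeat in six 12-character samples vanishingly unlikely, so a rate like 0.5 is itself strong evidence of pattern bias.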
The issue is rooted in the fundamental design of large language models, which prioritize generating the most probable sequence of characters based on context rather than ensuring equal probability across all possibilities. This contrasts sharply with cryptographic standards such as those defined by NIST, where secure password generation relies on uniform randomness from cryptographically secure pseudorandom number generators. As a result, the entropy of AI-generated passwords falls far below expected levels. Research measuring character distribution shows that while a truly random 16-character password could achieve close to 98 bits of entropy, outputs from AI models may deliver only a fraction of that, sometimes as low as 20 to 30 bits. Despite this gap, commonly used password strength meters continue to overestimate their security because they rely on outdated evaluation methods.
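The entropy figures above can be checked with a short calculation. Assuming a roughly 70-character alphabet (letters, digits, and a handful of symbols), which is what yields the article's ~98-bit figure for a 16-character password, the ideal entropy and a Shannon estimate from observed output compare as follows.

```python
import math
from collections import Counter

def ideal_entropy_bits(length, charset_size):
    # Uniform random choice over the alphabet: length * log2(charset_size).
    return length * math.log2(charset_size)

def shannon_entropy_bits(passwords, length=16):
    # Per-character Shannon entropy estimated from the pooled character
    # distribution of observed outputs, scaled to password length. A
    # skewed distribution (some characters favored) drives this down.
    chars = "".join(passwords)
    counts = Counter(chars)
    total = len(chars)
    per_char = -sum((c / total) * math.log2(c / total)
                    for c in counts.values())
    return per_char * length

# Assumed 70-character alphabet at 16 characters:
print(round(ideal_entropy_bits(16, 70), 1))  # -> 98.1
```

A model that heavily reuses a small set of characters and positions will score far below the ideal on the Shannon estimate, which is the gap the 20-to-30-bit measurements describe.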
Further analysis by Kaspersky reinforces these findings by identifying consistent biases in character selection across multiple AI systems. Different models exhibit preferences for certain letters and symbols, creating identifiable patterns that attackers can exploit. Testing also revealed that adjusting generation settings such as temperature does not significantly improve randomness, as the biases are embedded in the model itself. This allows adversaries to construct targeted attack strategies by focusing on the most likely outputs rather than attempting exhaustive brute force. In practical terms, this reduces the effort required to crack such passwords by several orders of magnitude, with a high percentage of tested AI-generated credentials failing under optimized attacks.
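The attack strategy described above amounts to ordering a guess list by observed output frequency instead of searching the keyspace uniformly. A minimal sketch, using hypothetical sampled outputs rather than real model data:

```python
from collections import Counter

def ranked_guess_list(observed_outputs):
    """Order candidate guesses so the passwords a model emits most often
    are tried first, concentrating effort on the likeliest outputs."""
    return [pw for pw, _ in Counter(observed_outputs).most_common()]

# Hypothetical sampled model outputs (illustrative only):
observed = ["Aa1!Aa1!", "Zz9@Zz9@", "Aa1!Aa1!", "Mm5#Mm5#", "Aa1!Aa1!"]
print(ranked_guess_list(observed))
# -> ['Aa1!Aa1!', 'Zz9@Zz9@', 'Mm5#Mm5#']
```

If a handful of outputs cover a large share of generations, an attacker who tries that ranked list first succeeds after a few guesses instead of the astronomical number a uniform keyspace would require, which is the orders-of-magnitude reduction the research describes.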
The risk extends beyond individual users to enterprise environments where AI coding agents are increasingly integrated into development workflows. These systems can autonomously generate and embed credentials into codebases and infrastructure, often without detection by existing security tools. Traditional secret scanning solutions are not designed to identify AI-generated patterns, leaving organizations exposed to hidden vulnerabilities. As adoption of generative AI continues to expand, the findings underscore the need for stronger safeguards, including the use of verified randomness sources, stricter credential management practices, and enhanced monitoring of automated development pipelines.
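By contrast, the verified-randomness safeguard the findings call for is straightforward in practice. A minimal sketch using Python's standard `secrets` module, which draws from a cryptographically secure random source, shows the recommended alternative to asking a language model:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a password with a cryptographically secure RNG, giving
    uniform probability across the alphabet for every position."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different, unpredictable string every call
```

With 70 characters chosen uniformly at 16 positions, this delivers the ~98 bits of entropy the NIST-style baseline expects, rather than the 20 to 30 bits measured from model output.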
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.





