Google’s AI Tool ‘Big Sleep’ Flags 20 Security Flaws in Open-Source Software Without Human Input

Google has announced that its AI tool Big Sleep has discovered 20 security vulnerabilities in popular open-source software. The company shared the news as part of its strategic push to apply AI to cybersecurity research. The announcement came from Heather Adkins, Google's Vice President of Security, in a post on the social media platform X.
Google AI Tool Flags Issues in Popular Tools
Big Sleep is an AI tool developed jointly by Google DeepMind and Project Zero. It has detected flaws in widely used software such as ImageMagick and FFmpeg, libraries that handle multimedia processing across formats including images, video, and audio.
Although Google has not yet disclosed technical details of the vulnerabilities, it has confirmed that the AI both located and reproduced the flaws without human assistance. After identification, a human security analyst reviewed and verified each finding before the company made its announcement on social media.
Working Mechanism of Big Sleep
The AI system scans software code for weak points while simulating malicious user behavior. Its core strength is that it continuously adapts its methods to uncover complex issues and identify vulnerabilities. The 20 flaws discovered by Big Sleep were spread across Google's internal platforms and several other open-source projects.
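Google has not published Big Sleep's internals, but the general technique it describes, feeding a program hostile inputs to surface weak points, resembles mutation-based fuzzing. The sketch below is purely illustrative: the toy `parse_header` function (standing in for a real media parser) and the `fuzz` harness are hypothetical, not part of Big Sleep.

```python
import random


def parse_header(data: bytes) -> str:
    """Toy media-format parser with a deliberate bug: it fails when the
    declared length byte exceeds the actual payload size."""
    if len(data) < 2 or data[:1] != b"M":
        raise ValueError("bad magic")  # graceful rejection of invalid input
    length = data[1]
    payload = data[2:2 + length]
    if len(payload) != length:
        # Simulates an out-of-bounds read a real parser might perform.
        raise IndexError("declared length exceeds payload")
    return payload.decode("latin-1")


def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one random byte of a known-valid seed input."""
    buf = bytearray(seed)
    buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)


def fuzz(seed: bytes, iterations: int = 5000, rng_seed: int = 0):
    """Feed mutated inputs to the parser and collect crashing cases."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_header(candidate)
        except ValueError:
            pass  # clean rejection, not a vulnerability
        except IndexError as exc:
            crashes.append((candidate, str(exc)))  # a found "flaw"
    return crashes


# Seed: magic byte 'M', length 5, payload "hello".
found = fuzz(b"M\x05hello")
print(f"crashing inputs found: {len(found)}")
```

A tool like Big Sleep goes far beyond this byte-flipping loop, using a language model to reason about code and adapt its probing strategy, but the core feedback cycle of generate input, observe failure, record the flaw is the same.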
Google has made it clear that the purpose of building such capable AI tools is not to replace human security researchers but to assist them. AI-powered cybersecurity tools can run thousands of test cases far faster than humans, freeing security teams to focus on strategic decision-making.
Big Sleep is prominent but not the only name known for AI-driven cybersecurity. Other tools, such as XBOW and RunSybil, have also proven effective at identifying bugs in software systems. XBOW has even earned a top spot on the HackerOne bug bounty platform in the United States.
There is no doubt that AI tools are redefining how vulnerabilities are identified and mitigated, but concerns remain. Many developers have warned of a rise in inaccurate bug reports from AI tools, a phenomenon they call "AI slop": reports of flaws that, on inspection, turn out not to exist.
Vlad Ionescu, co-founder of RunSybil, has praised Big Sleep, calling it a serious project backed by the right expertise and infrastructure.