Cybersecurity Competition Showcases AI’s Prowess and Pitfalls
LAS VEGAS: On a recent Friday morning, seven cybersecurity experts convened in an opulent suite on the 60th floor of the Cosmopolitan hotel.
Equipped with laptops, a tangle of network cables, spare Wi-Fi antennas, and a wall-mounted television serving as a colossal screen of programming code, these professionals devoted the ensuing two days to infiltrating a computer network in San Antonio as part of the annual National Collegiate Cyber Defense Competition.
As this “red team” of seasoned cybersecurity veterans conducted its assault, numerous elite computer science students, stationed in makeshift command centers across the nation, endeavored to thwart their advances.
“Each time we penetrate their systems and exfiltrate data, they incur a penalty in points,” explained Alex Levinson, a leader within the red team. “Our objective is to employ custom malware—something distinctive that they’ve never encountered.”
Hosted by the University of Texas at San Antonio, the event featured ten collegiate “blue teams,” each victorious in a regional contest earlier in the year.
This elaborate setting aimed to replicate the intense arena of cyber warfare and introduced an innovative participant: artificial intelligence. Notably, an eleventh blue team consisted solely of AI agents, functioning autonomously.
The competition illuminated both the formidable potential and the limitations of AI systems in the realm of cybersecurity. While they exhibit capabilities in both attacking and defending computer networks, they are not without their shortcomings.
Presently, they fall short of rivaling the expertise of seasoned cybersecurity professionals or even the brightest computer science students.
Nonetheless, AI firms relentlessly enhance their technologies. Anthropic recently announced it would restrict the deployment of its advanced AI, Claude Mythos, to a select few trusted organizations, fearing it might offer an advantage to cybercriminals.
Following suit, OpenAI also declared it would limit access to similar technology to a restricted group of partners.
Seated before a glass table in the Cosmopolitan suite, Dan Borges, one of the red team’s veterans, meticulously crafted an expanding list of directives for AI agents operating on his laptop.
As they navigated the San Antonio network, the agents executed tasks on his behalf.
A 37-year-old security engineer with experience at Uber and AI startup Scale AI, Borges wore his baseball cap backwards, revealing dark brown hair cascading halfway down his back. Adorning the cap were the words: “Aloha Got Soul.”
That morning, he attempted to introduce nefarious software onto multiple machines within the network. While his agents diligently tackled this repetitive endeavor, he strategized the next stage of the offensive. “They enable me to execute tasks in parallel,” he stated. “I can act swiftly and broadly.”
However, one of his bots unexpectedly began installing malware on his own device, a misguided attempt to comprehend the malware’s functionality. “Absolutely the worst idea I have ever heard,” Borges chuckled, reflecting on the mishap.
Guided by experts like Borges, these technologies can significantly accelerate a myriad of cybersecurity tasks. Nevertheless, he continues to navigate their inherent flaws.
“It’s straightforward to instruct them,” he noted. “Yet, one must consider: What is the optimal method to ensure they fulfill my intentions?”
Meanwhile, two red team members, David Cowen and Evan Anderson, worked from the expansive wall-mounted television, casually prompting Claude Code to organize and execute complex maneuvers designated with names such as Project Mayhem.
Their reliance on the technology was so pronounced that they occasionally departed the suite for refreshments, while Claude diligently prodded the Texas network.
Cowen, a cheerful security consultant from Plano, Texas, identifiable by his voluminous gray-brown beard, erupted in laughter each time the AI bots exhibited unexpected behavior.
Anderson, a self-identified hacker sporting sleeve tattoos who operates a Denver security firm, Offensive Context, remained unfazed by the turn of events.
One afternoon, after a lunch excursion, Cowen glanced at the television screen and burst into laughter once more.
During his absence for fried chicken sandwiches, one of his bots discerned that a blue team had uploaded new software onto a machine in San Antonio.
The bot retrieved the software’s default password from a database, breached the system, and disseminated the password among other bots. “Remarkable,” Cowen exclaimed, chuckling. “I was at lunch.”
Yet, Cowen was quick to assert that these bots are only as effective as their operators. He and Anderson maintained strict oversight, meticulously directing their agents towards defined tasks while remaining vigilant for any serious blunders.
As other red team members targeted blue teams composed of college students, Cowen and Anderson confronted a blue team comprised entirely of bots.
In this year’s competition, Anthropic arranged for its AI technology to engage alongside the ten teams of college students.

This automated cyber defense team functioned with minimal involvement from Anthropic personnel. While each collegiate team consisted of eight students, the Anthropic team boasted as many as 32 AI agents.
Ultimately, the bots secured seventh place out of the eleven participating teams, with Dakota State University in South Dakota, a perennial contender, claiming the championship.