Soar Technology lead scientist Fernando Maymi is one of many cybersecurity luminaries who will be in attendance at Black Hat Europe in London next month. While he’s there, he’ll be co-presenting (alongside Soar’s Alex Nickels) a 50-minute Briefing, “How to Build Synthetic Persons in Cyberspace,” which promises to be packed with intriguing ideas. Notably, Soar has developed Cyberspace Cognitive (CyCog) agents that can behave like attackers, defenders or users in a network. While many organizations have developed technologies and techniques for replicating enterprise-scale networks, realistically populating those networks with synthetic agents that behave like real people is a thorny challenge — one Maymi thinks Soar has solved.
We caught up with Maymi via email to get a better sense of what Black Hat Europe attendees can expect from this Briefing and to learn more about his own exciting experiences in cybersecurity.
Hey Fernando! Tell us a bit about yourself and your cybersecurity work.
Fernando Maymi: I work at a company in Michigan called Soar Technology, or SoarTech for short. We specialize in researching and developing artificial intelligence (AI) solutions to hard problems in training, unmanned platforms and cyberspace operations. I joined the company two years ago after retiring from the U.S. Army, where I taught cybersecurity at West Point, ran research projects at the Cyber Research Center and led the stand-up of the Army Cyber Institute, which is the Army’s think tank for cyberspace issues.
Through all of this, I’ve learned that if we only surround ourselves with like-minded people, we assume huge risks; if we connect with diverse folks and share information, we stand a much better chance. I just got back from Tokyo, where I was running a multi-sector cyber exercise to help prepare for the 2020 Olympics. It was awesome to watch folks from the power, manufacturing and other sectors come together to solve a really challenging scenario. Helping each other out really works!
Without spoiling too much, what are you going to be speaking about at Black Hat Europe this year?
Fernando: My colleague Alex Nickels and I have been involved in three projects aimed at researching and developing different kinds of synthetic autonomous actors for cyberspace. The first was an autonomous penetration tester for the U.S. Navy. Then we were asked to build a synthetic defender that human penetration testers could train against. Finally, DARPA asked us to build high-fidelity models of human users in order to test for vulnerabilities in user behaviors.
We had a head start, because our expertise is in modeling the cognition of expert humans as opposed to building autonomy from the ground up. Along the way, we found a lot of common issues and some really hard challenges. We also realized that autonomous agents will soon become common in cyberspace and that we need to come together as a community to address the security implications of this change—both positive and negative.
Why is this important, and what do you hope Black Hat attendees will learn from it?
Fernando: We are, at best, barely holding the line when it comes to defending our information systems against human adversaries. Once autonomous agents become effective attackers, we will absolutely need some cyber robots on the defensive side as well just to keep up. Even if you don’t buy into the idea that synthetic hackers are coming (and they are), we could really use some breakthroughs in developing autonomous cyber defenders to improve our security posture.
Despite all the hype, artificial intelligence is still not there yet when it comes to providing this capability. In our talk, we’ll provide a gentle introduction to AI, describe the state of the art and then show how we’ve developed some innovative approaches to defending and testing our networks. We’ll also point out where we’ve fallen flat on our faces, talk about why, and offer some thoughts on how we can work together as a community to address these shortfalls.
What have you learned about human behavior in the course of trying to emulate it in your family of CyCog agents?
Fernando: One of the coolest things we did was to gradually change the nature of email messages until we duped a synthetic user into clicking a link that it would not have clicked right off the bat. These agents learn and have biases much like us, so they can fall into the same traps we do. Another lesson was how slow we humans are compared to computers: to maintain the appearance of being human, we need to slow our agents down by a few orders of magnitude. Most importantly, it is not all that difficult to simulate about 80% of typical human behavior in cyberspace. The other 20%, however, is really, really hard, and boils down to the fact that AI systems simply lack plain common sense.
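That pacing point is easy to picture in code. The sketch below is purely illustrative (it is not CyCog’s actual mechanism, and the parameter values are invented): one common way to give an agent human-plausible timing is to sample its pauses from a log-normal distribution, which produces mostly typical delays with the occasional long one.

```python
import math
import random
import time

def human_delay(median_seconds=2.0, sigma=0.6):
    """Sample a human-plausible pause between agent actions.

    For a log-normal distribution the median equals exp(mu),
    so we set mu = ln(median). sigma controls the spread.
    (Illustrative parameters only.)
    """
    return random.lognormvariate(math.log(median_seconds), sigma)

def act_like_a_human(actions):
    # A synthetic agent could run each step in microseconds;
    # pausing a few orders of magnitude longer between steps
    # keeps its observable behavior plausibly human.
    for action in actions:
        time.sleep(human_delay())
        action()
```

A log-normal is a convenient choice here because it is strictly positive and right-skewed, matching how people mostly act at a steady pace but sometimes stop to think.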
What are you hoping to get out of Black Hat Europe this year?
Fernando: Our biggest hope is to stimulate some thinking, exchange ideas, and maybe meet some people we could collaborate with as we tackle the challenges ahead. I think many of us are at risk of buying into the hype about AI without realizing its limitations and how many challenges remain ahead of us. For example, behavioral models of the sort that can drive helpful synthetic cyberspace actors are in their infancy. We could really use a community approach to building this knowledge base so that synthetic cybersecurity agents can team with humans and enhance our performance. After all, we are in the business of building systems that model human expertise and, since that expertise has to come from somewhere, the more experts the better.