During testing in December, a pair of AI programs were fed into the system: the Air Force Research Laboratory’s Autonomous Air Combat Operations (AACO) and the Defense Advanced Research Projects Agency’s (DARPA) Air Combat Evolution (ACE). AACO’s AI agents focused on combat with a single adversary beyond visual range (BVR), while ACE focused on dogfight-style maneuvers with a closer, “visible” simulated enemy.
While VISTA requires a certified pilot in the rear cockpit as a backup, during test flights an engineer trained in the AI systems manned the front cockpit to deal with any technical issues that arose. In the end, those issues were minor. Though unable to elaborate on the specifics, DARPA program manager Lt. Col. Ryan Hefron explains that any hiccups were “to be expected when transitioning from virtual to live.” All in all, it was a significant step toward realizing Skyborg’s aim of getting autonomous aircraft off the ground as soon as possible.
The Department of Defense stresses that AACO and ACE are designed to supplement human pilots, not replace them. In some instances, AI copilot systems could act as a support mechanism for pilots in active combat: capable of parsing millions of data inputs per second and of taking control of the plane at critical junctures, they could prove vital in life-or-death situations. For more routine missions that do not require human input, flights could be entirely autonomous, with the aircraft’s nose section swapped out when a cockpit for a human pilot is not needed.
“We’re not trying to replace pilots, we’re trying to augment them, give them an extra tool,” Cotting says. He draws the analogy of soldiers of bygone campaigns riding into battle on horses. “The horse and the human had to work together,” he says. “The horse can run the trail really well, so the rider doesn’t have to worry about going from point A to B. His brain can be freed up to think bigger thoughts.” For example, Cotting says, a first lieutenant with 100 hours of experience in the cockpit could artificially gain the same edge as a much higher-ranking officer with 1,000 hours of flight experience, thanks to AI augmentation.
For Bill Gray, chief test pilot at the USAF Test Pilot School, incorporating AI is a natural extension of the work he does with human students. “Whenever we [pilots] talk to engineers and scientists about the difficulties of training and qualifying AI agents, they typically treat this as a new problem,” he says. “This bothers me, because I have been training and qualifying highly non-linear and unpredictable natural intelligence agents, students, for decades. For me, the question isn’t, ‘Can we train and qualify AI agents?’ It’s, ‘Why can we train and qualify humans, and what can this teach us about doing the same for AI agents?’”
Gray believes AI is “not a wonder tool that can solve all of the problems”; rather, it must be developed with a balanced approach and with built-in safety measures to prevent costly mishaps. An overreliance on AI, a misplaced “trust in autonomy,” can be dangerous, Gray warns, pointing to failures of Tesla’s Autopilot that have occurred even though Tesla insists drivers remain at the wheel as a backup. Cotting agrees, calling the ability to test AI programs on VISTA a “risk-reduction plan.” Because the AI is trained on a conventional system such as the VISTA X-62, rather than on an entirely new aircraft, automatic limits and, if necessary, intervention by the safety pilot can keep it from endangering the aircraft as it learns.
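The “automatic limits” described here amount, at their simplest, to a software guard sitting between the AI agent and the flight controls. The sketch below is purely illustrative and is not the actual VISTA X-62 software: every name, limit, and threshold is an assumption, chosen only to show the general pattern of clamping an agent’s commands to a safe envelope and handing control back to the safety pilot after repeated violations.

```python
# Hypothetical sketch of an envelope guard between an AI agent and the
# flight controls. All classes, limits, and thresholds are illustrative
# assumptions, not details of any real flight-control system.

from dataclasses import dataclass


@dataclass
class Command:
    pitch_deg: float   # commanded pitch attitude, degrees
    roll_deg: float    # commanded bank angle, degrees
    throttle: float    # 0.0 .. 1.0


@dataclass
class Envelope:
    max_pitch_deg: float = 25.0
    max_roll_deg: float = 60.0
    max_violations: int = 3    # strikes before reverting to the safety pilot


class EnvelopeGuard:
    """Clamp AI commands to the flight envelope; escalate repeated violations."""

    def __init__(self, envelope: Envelope):
        self.envelope = envelope
        self.violations = 0

    def filter(self, cmd: Command) -> tuple[Command, bool]:
        """Return a safe command and whether the safety pilot should take over."""
        e = self.envelope
        safe = Command(
            pitch_deg=max(-e.max_pitch_deg, min(e.max_pitch_deg, cmd.pitch_deg)),
            roll_deg=max(-e.max_roll_deg, min(e.max_roll_deg, cmd.roll_deg)),
            throttle=max(0.0, min(1.0, cmd.throttle)),
        )
        if safe != cmd:
            # The raw command left the envelope; count it as a strike.
            self.violations += 1
        revert_to_pilot = self.violations >= e.max_violations
        return safe, revert_to_pilot


# Usage: the guard filters every command the agent issues.
guard = EnvelopeGuard(Envelope())
safe_cmd, handoff = guard.filter(Command(pitch_deg=40.0, roll_deg=30.0, throttle=0.9))
print(safe_cmd, handoff)  # pitch clipped to 25.0; handoff stays False until 3 strikes
```

The design choice the sketch highlights is the one Gray and Cotting describe: the learning agent is never trusted directly, its outputs pass through hard limits first, and a human remains the final backstop.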
Source: www.wired.com