Among the advantages of these combat aircraft are their relatively low cost and the reduction in human losses they promise. However, their use will require certain restrictions.
The United States’ development of an unmanned fighter aircraft powered by artificial intelligence (AI) has the potential to give its Air Force an advantage in modern warfare scenarios. However, the use of this technology in autonomous systems of this kind also raises important ethical concerns, several experts told Fox News Digital.
The US Air Force Research Laboratory has been developing a pilotless, fighter-like aircraft called the XQ-58A Valkyrie, capable of completing missions without human intervention thanks to an AI algorithm. This and other improvements are being carried out by defense contractor Kratos Defense and Security Solutions, the aircraft’s creator.
Some of the advantages of the aircraft are its relatively low cost and the fact that its use would reduce human losses in the event of a conflict. “The ability to use assets more aggressively without the cost or risk of casualties will add enormous capabilities to military planners,” said Christopher Alexander, director of analysis at technology development company Pioneer Development Group.

Still, the technology raises ethical concerns. One of them, Alexander notes, is how much autonomy should be granted to a lethal weapon wielded by AI, especially since the US drone program has been criticized for its history of civilian casualties.
Alexander believes these concerns so far “have been well managed by the Department of Defense.” However, Phil Siegel, an AI expert and founder of the Center for Advanced Readiness and Threat Response Simulation (CAPTRS), believes that further development in this regard will be necessary if the role and responsibilities of AI-powered aircraft increase. “The key is what objectives we set for the technology (…) The question then is how to weigh these objectives when it has multiple functions,” he explains.
Siegel emphasizes the need to give these systems performance objectives and instructions so that they can act both in controlled scenarios and in more complex ones, where they must make decisions with little information or fewer tools, for example, if their sensors or cameras are disabled or certain data is unavailable.
For his part, Steve Fendley, president of the Kratos Unmanned Systems Division, maintains that the company has implemented sufficient measures to avoid accidents, and indicated that the systems under development will require human involvement before they can make certain decisions. “The most important thing to understand is that having certain capabilities does not mean they will be used. It is very easy to implement a system that can deploy weapons without asking anyone. It is also very easy to have a system there that restricts it,” he said.
Source: RT