Faculty AI, a UK-based consultancy known for collaborating with the UK government on artificial intelligence safety, the National Health Service (NHS), and education, has come under scrutiny for its role in developing AI technologies for military drones.
Faculty AI has established itself as a significant player in the UK’s AI sector, offering services that span various industries. However, its defense-related work has raised ethical and policy concerns, particularly about the potential use of autonomous drones in military applications.
Faculty AI is reportedly developing and deploying AI models for unmanned aerial vehicles (UAVs), according to a defense industry partner. This work includes applications such as identifying subjects, tracking object movement, and exploring autonomous swarming capabilities.
The company’s partnership with London-based startup Hadean highlights its contributions to advancing drone technologies. While Faculty’s work with Hadean reportedly excludes weapons targeting, the company has not disclosed whether it is developing drones capable of autonomous lethal force, citing confidentiality agreements.
A spokesperson for Faculty defended its practices, stating:
“We help to develop novel AI models that will help our defense partners create safer, more robust solutions. We have rigorous ethical policies and internal processes and follow ethical guidelines on AI from the Ministry of Defence.”
The spokesperson emphasized Faculty’s decade-long expertise in AI safety, including efforts to counter child sexual abuse and terrorism, adding, “We’re trusted by governments and model developers to ensure frontier AI is safe, and by defense clients to apply AI ethically to help keep citizens safe.”
Unlike companies like OpenAI or DeepMind, Faculty does not create its own AI models but instead resells models and provides consulting services to governments and private sectors. Its high-profile projects include data analysis for the Vote Leave campaign during Brexit and significant contributions to the UK government’s COVID-19 response under former Prime Minister Boris Johnson.
Faculty also works closely with the UK government’s Artificial Intelligence Safety Institute (AISI), established in 2023 by then-Prime Minister Rishi Sunak. The institute aims to address safety concerns as AI technologies continue to evolve rapidly.
The potential application of AI in military drones has sparked global debates. Recent advancements in technology have raised concerns about the possibility of drones that could track and eliminate targets without human intervention.
The ethical implications of such autonomous systems have drawn criticism from politicians and experts.
Despite these calls for caution, Faculty’s continued collaboration with the AISI puts it in a strategic position to influence UK policy on AI and military technologies.
In November, the AISI contracted Faculty to study how large language models could be misused for criminal or harmful activities. The institute stated that Faculty would be a “significant strategic collaborator” in its efforts to safeguard AI systems.
Faculty’s spokesperson reiterated the company’s commitment to ethical practices:
“We’ve worked on AI safety for a decade and are world-leading experts in this field. That’s why we’re trusted by governments and model developers.”