AI as weapons

Featured image by ZALA Aero Group

AI is being used in many areas, bringing several advantages and opportunities; however, it also brings threats. One of these is the use of AI in weapons.

AI weapons are programmed to find a class of targets, then select and attack a specific person or object within that class, with little human control over the decisions that are made.

These technologies are not science fiction: AI weapons have already been used in the past and are being used right now. In the current military conflict between Russia and Ukraine, AI and machine learning technologies can be found in weapons on both sides. Russia is using a lethal »suicide« drone called KUB-BLA (also known as a »loitering munition«) that can operate with little human control and can autonomously identify targets using AI. On the other side, Ukraine is using the controversial facial recognition software Clearview AI to identify Russian soldiers, saboteurs and spies.

These new technologies are shaking up the way wars are fought. In the years ahead, they will become even more accessible and will fundamentally change the nature of warfare.

This development is raising some serious concerns. AI is a tool, and this tool can be used for good or for bad. While these technologies pose a danger to our freedom and even our lives (especially those that make autonomous decisions), the real answer is not to switch them off, but to regulate their use. We must understand how AI technologies are being used and reduce the unwanted consequences they bring. AI technologies should be transparent, provide explainability and retain human oversight.

This is why ethics in AI is important – to ensure that AI technologies maintain human dignity and do not cause harm to people.

Author: Matej Kovačič


Sources:

Exclusive: Ukraine has started using Clearview AI’s facial recognition during war

Russia’s KUB-BLA kamikaze drone intercepted in Ukraine