This article examines the national security implications of the weaponisation of AI-manipulated digital content. In particular, it outlines the threats and risks associated with the use of hyper-realistic synthetic video, audio, images or text (generally known as 'synthetic media' or 'deepfakes') to compromise or influence decision-making processes of national security relevance. It argues that synthetic media would most likely be employed within information and influence operations targeting public opinion or predefined social groups. Other potential targets of national security relevance, such as specific individuals or organisations with responsibility for national security, should be adequately equipped to counter the threat, with appropriate procedures, technologies and organisational arrangements in place.