The pace of advancement in media generation and manipulation technologies is staggering. Consequently, purely statistical detection methods are no longer adequate for identifying falsified media: detection techniques that rely on statistical fingerprints can often be evaded with limited additional resources (e.g., algorithm development, data, or compute).
The Defense Advanced Research Projects Agency's (DARPA) Semantic Forensics (SemaFor) program seeks to develop innovative semantic technologies for analyzing media. These include semantic detection algorithms, which determine whether multi-modal media assets have been generated or manipulated; attribution algorithms, which infer whether multi-modal media originates from a particular organization or individual; and characterization algorithms, which reason about whether multi-modal media was generated or manipulated for malicious purposes. Together, these SemaFor technologies can help detect, attribute, and characterize adversary disinformation campaigns.
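To make the relationship between these three analysis tasks concrete, below is a minimal Python sketch. It is not part of SemaFor, and every name in it (MediaAsset, AnalysisResult, SemanticAnalyzer, analyze) is an illustrative assumption; it only shows one way the outputs of detection, attribution, and characterization for a single multi-modal asset might be organized.

```python
# Hypothetical sketch (not part of SemaFor): one way to organize the three
# analysis tasks described above. All names below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class MediaAsset:
    """A multi-modal media asset: any combination of text, image, audio, and video."""
    text: Optional[str] = None
    image_path: Optional[str] = None
    audio_path: Optional[str] = None
    video_path: Optional[str] = None


@dataclass
class AnalysisResult:
    manipulated_score: float          # detection: evidence of generation/manipulation, in [0, 1]
    attributed_source: Optional[str]  # attribution: inferred originating organization or individual
    malicious_intent_score: float     # characterization: evidence of malicious purpose, in [0, 1]


class SemanticAnalyzer(Protocol):
    """Interface a concrete detection/attribution/characterization system might expose."""
    def detect(self, asset: MediaAsset) -> float: ...
    def attribute(self, asset: MediaAsset) -> Optional[str]: ...
    def characterize(self, asset: MediaAsset) -> float: ...


def analyze(asset: MediaAsset, analyzer: SemanticAnalyzer) -> AnalysisResult:
    """Run the three analyses on one asset and bundle their outputs."""
    return AnalysisResult(
        manipulated_score=analyzer.detect(asset),
        attributed_source=analyzer.attribute(asset),
        malicious_intent_score=analyzer.characterize(asset),
    )
```

A concrete analyzer would wrap actual multi-modal models behind this interface; the sketch only fixes the shape of the three outputs the program description names.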
For more information about the SemaFor program, please visit DARPA's website or contact us at semafor@darpa.mil.
The AI Forensics Open Research Challenge Evaluations (AI FORCE) are a series of publicly available challenges related to generative AI capabilities. In line with DARPA's goal of developing techniques to mitigate the threats posed by state-of-the-art AI systems, the AI FORCE challenges will help DARPA and the broader community collaborate on research toward safe, secure, and trustworthy AI systems.
This website is not a Department of Defense (DoD) or Defense Advanced Research Projects Agency (DARPA) website and is hosted by a third-party non-government entity. The appearance of the DARPA logo does not constitute endorsement by the DoD/DARPA of non-U.S. government information, products, or services. Although the host may use this site as an additional distribution channel for information, the DoD/DARPA does not exercise editorial control over all information you may encounter.