Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval
Updated Jun 24, 2025 - Python
Code associated with the NAACL 2025 paper "COVE: COntext and VEracity prediction for out-of-context images"
FailSafe: An autonomous fact-checking framework leveraging Multi-Agent LLMs and Structured Argumentation Graphs (SAG) to verify claims with deep-web retrieval and reasoning.
This repository provides scripts and workflows for translating fact-checking datasets and automating claim classification using large language models (LLMs).
Code associated with the preprint: "M4FC: a Multimodal, Multilingual, Multicultural, Multitask real-world Fact-Checking Dataset"
debunkr.org Dashboard is a browser extension that helps you analyze suspicious content on the web using AI-powered analysis. Simply highlight text on any website, right-click, and let our egalitarian AI analyze it for bias, manipulation, and power structures.
Tathya (तथ्य, "truth") is an agentic fact-checking system that verifies claims using multiple sources including Google Search, DuckDuckGo, Wikidata, and news APIs. It provides structured analysis with confidence scores, detailed explanations, and transparent source attribution through a modern Streamlit interface and FastAPI backend.
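The description above mentions multi-source verification with confidence scores. A minimal sketch of how per-source verdicts might be folded into a single score; the source names and weights below are illustrative, not Tathya's actual implementation:

```python
# Hypothetical sketch: combine weighted per-source verdicts into one
# confidence score (fraction of total weight supporting the claim).
# Source names and weights are made up for illustration.

def aggregate_confidence(verdicts):
    """verdicts: list of (source_name, supports_claim: bool, weight: float)."""
    total = sum(w for _, _, w in verdicts)
    if total == 0:
        return 0.0
    supporting = sum(w for _, s, w in verdicts if s)
    return supporting / total

verdicts = [
    ("google_search", True, 0.4),
    ("wikidata", True, 0.3),
    ("news_api", False, 0.3),
]
score = aggregate_confidence(verdicts)  # supporting weight fraction, about 0.7
```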
OpenSiteTrust is an open, explainable, and reusable website scoring ecosystem.
An advanced AI-powered fake news detection system that verifies text, images, and social media posts using Gemini AI, FastAPI, and Next.js. Includes a modern web interface, a lightweight Streamlit app, and a Chrome extension for real-time fake content detection. Built to combat misinformation with explainable AI results and contextual source links.
🔍 ABCheckers 💬 is a data-driven project that analyzes Twitter discourse to uncover misinformation around 🇵🇭 inflation and the weakening peso, empowering users with contextual insights.
Media Literacy System powered by AI - Analyze news for bias and manipulation.
🛡️ WonderAI: Your digital shield against fake news. Real-time content analysis and fact-checking powered by Advanced LLMs.
Adventure Guardian AI is a unified safety intelligence system designed to protect adventure travellers in India. It verifies trek information, analyzes health risks, and detects fraud using AI-powered vision, geodata, weather intelligence, and pattern analysis. By combining truth, health, and fraud assessments, it generates a single Verified Trek S
A novel multimodal architecture for detecting misinformation by explicitly modeling the consistency between visual content, textual claims, and external factual knowledge.
AI-powered fake news detection system using advanced NLP, fact verification, and source reliability analysis. Built with Next.js 14, featuring real-time credibility assessment, comprehensive RESTful API, and professional dark/light mode interface for combating misinformation.
This project implements a complete NLP pipeline for Persian tweets to classify topics and detect fake news. Using a Random Forest classifier, it compares tweet content with trusted news sources, achieving 70% accuracy in fake news detection.
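A toy illustration of the "compare tweet content with trusted news sources" step described above; it uses simple Jaccard word overlap as a stand-in for the project's actual features, which feed a Random Forest classifier:

```python
# Illustrative sketch: score a tweet against trusted source texts using
# Jaccard similarity over lowercase word tokens. This is a stand-in for
# the project's richer Random Forest feature pipeline.

def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def max_source_similarity(tweet: str, sources: list[str]) -> float:
    # The best match across trusted sources; low values suggest the tweet
    # is unsupported by any trusted outlet.
    return max((jaccard(tweet, s) for s in sources), default=0.0)
```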
Imagine Hashing embeds cryptographic hashes into images using steganography and SHA256 to ensure authenticity, integrity, and resilience against tampering or manipulation.
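A minimal sketch of the embed-and-verify idea, assuming 8-bit pixel bytes and least-significant-bit steganography; the real project's image format and embedding scheme may differ:

```python
import hashlib

# Sketch: hash the pixel data with LSBs zeroed, then store the SHA-256
# digest's 256 bits in the LSBs of the first 256 pixels. Any later change
# to the non-LSB bits alters the recomputed hash and fails verification.

def embed_hash(pixels: bytearray) -> bytearray:
    digest = hashlib.sha256(bytes(p & 0xFE for p in pixels)).digest()
    bits = [(byte >> (7 - i)) & 1 for byte in digest for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):          # 256 bits -> first 256 pixels
        out[i] = (out[i] & 0xFE) | bit
    return out

def verify(pixels: bytearray) -> bool:
    expected = hashlib.sha256(bytes(p & 0xFE for p in pixels)).digest()
    bits = [p & 1 for p in pixels[:256]]
    stored = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, 256, 8)
    )
    return stored == expected
```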
Watermarking System | AI-Generated Media Detection. A system for detecting and flagging AI-generated images using ML and steganography. Ensures authenticity with imperceptible, resilient watermarks embedded at creation.
Fine-tuned roberta-base classifier on the LIAR dataset. Accepts multiple input types (text, URLs, and PDFs) and outputs a prediction with a confidence score. It also leverages google/flan-t5-base to generate explanations and uses an agentic AI with LangGraph to orchestrate agents for planning, retrieval, execution, fallback, and reasoning.
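A plain-Python sketch of the retrieval-with-fallback pattern named in the description above; the agent functions are hypothetical stand-ins, not the repository's actual LangGraph nodes:

```python
# Illustrative fallback orchestration: try retrieval agents in order and
# fall back to the next when one returns no evidence. Agent names and
# behaviors here are invented for the example.

def retrieve_with_fallback(claim, agents):
    for name, agent in agents:
        evidence = agent(claim)
        if evidence:                      # non-empty result ends the chain
            return name, evidence
    return None, []

agents = [
    ("primary_search", lambda c: []),              # pretend this source fails
    ("backup_search", lambda c: [f"doc about {c}"]),
]
name, evidence = retrieve_with_fallback("claim X", agents)  # -> "backup_search"
```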
A transparent, agentic system for multimodal misinformation detection. Verifies text and image authenticity using LLM & VLM agents with explainable reasoning.