YRL-AIDA/rag_test

RAG testing

This repository provides a framework for evaluating and comparing retrieval strategies for RAG systems, using the LongDocURL benchmark.

The primary goal is to analyze how different document parsing and retrieval methods affect the quality of LLM responses on long documents.

Tested Dataset Overview

The dataset was created by selecting all Understanding and Locating questions whose evidence elements are Text and Layout.
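The selection step above can be sketched as a simple filter. This is a hedged sketch only: the field names `task` and `evidence_sources` and the exact label strings are assumptions about the LongDocURL annotation schema, not confirmed by this repository.

```python
# Hypothetical LongDocURL schema: each sample is a dict with a "task" label
# and a list of "evidence_sources" labels. These names are assumptions.
ALLOWED_TASKS = {"Understanding", "Locating"}
ALLOWED_EVIDENCE = {"Text", "Layout"}

def select_samples(samples):
    """Keep Understanding/Locating questions whose evidence elements
    are limited to Text and Layout."""
    return [
        s for s in samples
        if s.get("task") in ALLOWED_TASKS
        and s.get("evidence_sources")
        and set(s["evidence_sources"]) <= ALLOWED_EVIDENCE
    ]
```

Samples whose evidence includes any other element type (e.g. tables or figures) are excluded, so every retained question is answerable from text and layout evidence alone.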

Dataset

Tested Strategies

  • PureLLM: questioning without any retrieved data.
  • PyMuPDFPartial: questioning using the cut-off paradigm from LongDocURL.
  • PyMuPDFFull: questioning using a PyMuPDF-based classic RAG algorithm with a chunk size of 500 and an overlap of 100.
  • MinerU: questioning using a MinerU-based RAG algorithm.
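The chunking step of the PyMuPDFFull strategy (chunk size 500, overlap 100) can be sketched as a sliding character window. This is an assumption-laden sketch: in the real pipeline the text would come from PyMuPDF (e.g. `page.get_text()`), and character-based splitting is an assumption about how "chunk size" is measured here.

```python
# Hedged sketch: fixed-size character windows with overlap, matching the
# stated chunk size of 500 and overlap of 100. The source of `text` (PyMuPDF
# extraction) and the character-based unit are assumptions.
def chunk_text(text, size=500, overlap=100):
    """Split text into overlapping fixed-size character windows."""
    if not text:
        return []
    step = size - overlap  # each window starts 400 characters after the last
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```

With these defaults, consecutive chunks share their last/first 100 characters, so evidence that straddles a chunk boundary still appears intact in at least one chunk.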

Dependencies & Versions
