So Perplexity can kind of weakly analyze the first few pages of small PDFs one at a time, but I’d love to have something that would let me upload several hundred research papers and textbooks, analyze them for consensus and contradictions, and give me more meaningful search results and summaries than keyword searching alone. Does anything like this exist in a fairly user-friendly, accessible format?
Afforai might be able to do stuff like this. I haven’t tested it myself yet, but the service also seems to have some other features that might be relevant for your use case.
Wow. Yes, this looks spot on, thanks! Warning: whenever I find cool services like this, they tend to go under within a year or two, so apologies in advance.
Don’t have an answer, but I’d be interested in something like that too. I know Microsoft released a freely available lightweight LLM, called Phi-3, that’s supposed to be easier for people to run locally. Decent article from Ars Technica: https://arstechnica.com/information-technology/2024/04/microsofts-phi-3-shows-the-surprising-power-of-small-locally-run-ai-language-models/
I have used a small R package that reads the text content of a PDF and sends it to a local llama model via ollama, or to one of the large LLM APIs. I could use it to get structured answers in JSON format on a whole folder of papers, but a typical model’s context length is only long enough to hold a single (roughly 40-page) paper in memory. So I had to get separate structured answers on each paper and then generate a complete summary from those. Unfortunately, that’s not user-friendly yet.
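Just to make the per-paper loop concrete, here’s a rough Python sketch of the same idea (not the actual R package; `ask_llm` is a stand-in for whatever local or API model call you use, and the JSON keys are made up for illustration):

```python
import json

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a local ollama server).
    Returns a canned JSON answer so this sketch runs without a model."""
    return json.dumps({"main_finding": "example finding", "sample_size": 42})

def summarize_papers(papers: dict[str, str]) -> dict[str, dict]:
    """Ask for a structured answer on each paper separately,
    since a whole folder won't fit in one context window."""
    results = {}
    for name, text in papers.items():
        prompt = (
            "Answer in JSON with keys 'main_finding' and 'sample_size'.\n\n"
            + text[:8000]  # crude truncation to stay within the context limit
        )
        results[name] = json.loads(ask_llm(prompt))
    return results

papers = {"paper1.pdf": "Full text of paper 1...", "paper2.pdf": "Full text of paper 2..."}
per_paper = summarize_papers(papers)
# A second pass can then summarize the summaries:
combined_prompt = "Find agreements and contradictions:\n" + json.dumps(per_paper)
```

The second pass is the non-user-friendly part: you end up gluing the per-paper JSON back together yourself before asking the final question.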
Interesting start, yeah, but it looks a bit in the weeds for my purposes right now.
I don’t know of one, but I too would be interested to see what this looks like.
How do you currently store and organize PDFs? I used to use Mendeley during grad school, and honestly I really, really liked it. But being able to ask a question and get a natural language response that suggests which papers might contain insights when taken together would be an incredible asset.
Mendeley holdout here myself! I tried switching to EndNote because Mendeley is, for all practical purposes, abandonware now, but the conversion is very painful, with loss of a lot of data: notes, organizational structure, etc.
Still using the old desktop app nearly daily. If it was still a living project, integrating something like this into mendeley would be incredible.
It’s abandoned!?
I used it circa 2010-14. I believe it was still active then.
That’s a shame. It was a great program. Everyone thought I was weird for not paying for EndNote, but it was just as good, if not better!
It still runs, but there are no updates; they just push their web interface, which is very weak compared to the desktop app. New user adoption is likely next to nil, and most people I talk to under 30 have never heard of it. Unless there’s a better tool to switch to, though, I’ll never have time to replicate the organizational infrastructure I’ve built in Mendeley, and I really like it, so I’ll use it until it disappears.
I don’t think you can use Retrieval-Augmented Generation or vector databases for a task like that, at least not if you want to compare whole papers and not just a single statement or fact. And that’s what most tools are focused on. As far as I know, the tools concerned with big PDF libraries are meant to retrieve specific information from the library, relevant to a specific question from the user. If your task is to go through the complete texts, it’s not the right tool, because it’s made to only pick out chunks of text.
I’d say you need an LLM with a long context length, like 128k or way more, fit all the texts in, and add your question. Or you come up with a clever agent: make it summarize each paper individually or extract facts, then feed those results back and let it search for contradictions, or do a summary of the summaries.
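The agent version is basically a map-reduce over the library. A minimal sketch of that shape (the `fake_llm` stub just returns canned text so the example runs; a real version would call an actual model at both stages):

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text so the sketch runs."""
    if prompt.startswith("List"):
        return "claim about " + prompt.split("\n", 1)[1][:20]
    return "CONSENSUS/CONTRADICTION REPORT over:\n" + prompt

def map_reduce_review(papers: dict[str, str], llm) -> str:
    # Map step: one call per paper, so each paper fits in the context window
    claims = {name: llm("List the key claims in:\n" + text)
              for name, text in papers.items()}
    # Reduce step: one final call over the much smaller set of extracted claims
    digest = "\n".join(f"{name}: {c}" for name, c in claims.items())
    return llm("Identify consensus and contradictions among these claims:\n" + digest)

report = map_reduce_review({"a.pdf": "text of paper A", "b.pdf": "text of paper B"}, fake_llm)
```

The reduce step only works because the extracted claims are much shorter than the papers; if even the claims overflow the context, you need another round of summarizing the summaries.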
(And I’m not sure AI is up to the task anyway. Doing meta-studies is a really complex job, done by highly skilled professionals in a field, and it takes them months… I don’t think current AI performance is anywhere near that level. It’s probably going to make something up instead of outputting anything related to reality.)
Check out Afforai. It’s not perfect at all, but it is on track to do what I want.
Ah, nice. Thanks for sharing.
That would likely require a language model fine-tuned on said material. The problem is feeding PDFs in as a structured data source for the model to ingest; fine-tuning can’t happen with random unstructured PDFs.
Chroma is supposed to be able to import a ton of information into a vectorized format that lets you search through it in a way that’s semantically meaningful, so you (or your tool) can sort of pick out the stuff from a huge batch of source material that you need to pass to the LLM for any given query.
I played around with it a little bit and wasn’t able to determine if it was a real thing or just weird AI hype, but people seem to take it seriously. I’d bet someone has attempted to build a little system on top of it that lets you do stuff like what you’re wanting (since that’s what it’s made for), but IDK how well it would work… it might be useful to search for stuff adjacent to Chroma or vector databases to see if there are tools like that, though.
Look into RAG using a vector database, this is exactly what they’re for. https://www.linkedin.com/events/buildaragapplicationontheaistac7191489677017649153
Looks a bit beyond me unfortunately, but sounds interesting