I think there's a valid point about the production-readiness aspect. It's one thing to release a research paper, and another to market something as a service. The expectation levels are just different, and it's fair to scrutinize accordingly.
I took the screenshot of the bill in their article and ran it through the tool at https://va.landing.ai/demo/doc-extraction. The tool doesn't hallucinate any of the values reported in the article. In fact, the value for Profit/loss for continuing operations is 1654 in their extraction, which is the ground truth, yet they've drawn a red bbox around it.
good catch on the 1654, will edit that on our blog! try it multiple times, we've noticed especially for tabular data it's fairly nondeterministic. we trialed it over 10 times on many financial CIMs and observed this phenomenon.
That's the problem with current deep learning models: they don't seem to know when they are wrong.
There was so much hype about AlphaGo years ago, which seemed to be very good at reasoning about what's good and what's not, that I thought some form of "AI" was really going to come relatively soon. The reality we have these days is that statistical models seem to be running without any constraints, making up rules as they go.
I'm really thankful for AI-assisted coding, code reviews and many other things that came from that, but the fact is, these really are just assistants that will make very bad mistakes, and you need to watch them carefully.
Most people don’t realize when they’re wrong either. It’s fascinating that, just like with humans, reasoning appears to reduce hallucinations.
At least an AI will respond politely when you point out its mistakes.
I don't think that's the case. When a model is reasoning, it sometimes starts gaslighting itself and "solving" problems completely different from the one you've shown it. Reasoning can help "in general", but very frequently it also makes the model more "nondeterministic". Without reasoning, it usually ends up just writing some code from its training data, but with reasoning, it can end up hallucinating hard. Yesterday, I asked Claude (thinking) to solve a problem for me in C++ and it showed the result in Python.
Ah, but I (usually) know when I will probably be wrong if I do give an answer, because I know I'm not familiar enough with the subject. Or if I do answer, I will explicitly say it's an educated guess at best. What I will not do is just spout bullshit with the confidence of an orange-musk-puppet.
Today, Andrew Ng, one of the legends of the AI world, released a new document extraction service that went viral on X:
https://x.com/AndrewYNg/status/1895183929977843970
At Pulse, we put the models to the test with complex financial statements and nested tables – the results were underwhelming to say the least, and suffered from many of the same issues we see when simply dumping documents into GPT or Claude.
It seems like you missed the point. Andrew Ng is not there to give you production grade models. He exists to deliver a proof of concept that needs refinements.
>Here's an idea that could use some polish, but I think as an esteemed AI researcher that it could improve your models. -- Andrew Ng
>OH MY GOSH! IT ISN'T PRODUCTION READY OUT OF THE BOX, LOOK AT HOW DUMB THIS STUFFED SHIRT HAPPENS TO BE!!! -- You
Nobody appreciates a grandstander. You're really treading on thin ice by attacking someone who has given so much to the AI community and asked for so little in return. Andrew Ng clearly does this because he enjoys it. You are here to self-promote and it looks bad on you.
This is not about some paper Ng published with a new idea that needs some polishing before being useful in the real world.
It's a product released by a company Ng cofounded. So expecting production-readiness isn't asking for too much in my opinion.
Except it's a video introducing the concept, trying to create buzz around it, and inviting people to try it (for free), with a link to the page where you can do so (at least as far as I could tell).
So yes, but not really. This is more like when Google released the initial Android and offered it to people to try, to get feedback. Yes, it's not offered as an obfuscated academic paper in a paywalled journal, but implying the video is promoting a half-baked product as production-ready for quick profit just because it's hosted on a proper landing page is a bit of an extreme take, I think.
we respect andrew a lot, as we mentioned in our blog! he's an absolute legend in the field: founded google brain and coursera, and worked heavily on baidu ai. this is more to inform everyone not to blindly trust new document extraction tools without really giving them challenges!
That's the standard tier of competence you expect from Ng. Academia is always close but no cigar.
> That's the standard tier of competence you expect from Ng. Academia is always close but no cigar.
Academics do research. You should not expect an academic paper to be turned into a business or production overnight.
The first neural network, the Mark 1 Perceptron, was invented during WWII for OCR. It took 70 years of non-commercial research to bring us to the very useful multimodal LLMs of today.
> The first neural network, the Mark 1 Perceptron, was invented during WWII for OCR.
You're about a decade off: the Mark I Perceptron was created in 1958 [0]. The paper that introduced the idea (A Logical Calculus of the Ideas Immanent in Nervous Activity), however, was written during WW2 (1943) [1].
[0] https://en.m.wikipedia.org/wiki/Mark_I_Perceptron
[1] https://en.m.wikipedia.org/wiki/A_Logical_Calculus_of_the_Id...
It's more that they had to wait for processing power to catch up.
One of my slightly older friends got an AI doctorate in the 00s, and would always lament that a business would never bother reading his thesis; they'd just end up recreating what he did in a few weeks themselves.
It's easy to forget now that in the 90s/00s/10s AI research was mainly viewed as a waste of time. The recurring joke was that general AI was just 20 years away, and had been for the last few decades.
> The recurring joke was that general AI was just 20 years away, and had been for the last few decades.
You seem to think that joke is out of date now. Many others don't ;)
don't be mistaken, andrew's a legend! he's done some incredible work -- google brain, coursera, baidu ai, etc.
He might not have business chops, but this seems a bit harsh :/
And on the other side, there are companies like Theranos, where you think the world will never be the same again, until you actually try the thing they're selling. Full cigar promised, but not even close.
Not saying this is the case with the OP company, but if you're ready to make sweeping generalizations about cigars like that on the basis of a commercial blog selling a product, you might as well invoke some healthy skepticism, and consider how the generalization works on both sides of the spectrum.
The whole corporation-glorifying, academia-bashing gaslighting narrative is getting very tiring lately.
Why isn't there a pixel comparison step after the extraction? I think that would have identified some errors. Essentially, read, extract, recreate, pixel compare.
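For what it's worth, a minimal sketch of that read/extract/recreate/pixel-compare idea in Python, assuming a hypothetical re-render step that draws the extracted values back into the original layout (fonts and layout drift in practice, so you compare against a tolerance rather than exact equality):

    # Hedged sketch: verify an extraction by re-rendering it and pixel-diffing
    # against the original scan. The rendered image is assumed to come from a
    # hypothetical helper that redraws the extracted values in place.
    from PIL import Image, ImageChops

    def extraction_looks_ok(original: Image.Image, rendered: Image.Image,
                            tolerance: float = 0.05) -> bool:
        """Accept the extraction if only a small fraction of pixels differ."""
        a = original.convert("L")
        b = rendered.convert("L").resize(a.size)
        diff = ImageChops.difference(a, b)
        hist = diff.histogram()            # 256 bins of absolute pixel deltas
        changed = sum(hist[32:])           # pixels that moved by more than ~12%
        total = a.size[0] * a.size[1]
        return changed / total <= tolerance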
Personally I find it frustrating they called it "agentic" parsing when there's nothing agentic about it. Not surprised the quality is lackluster.
we're not the biggest believers in 'agentic' parsing! we definitely do believe there's a specific role for LLMs in the data ingestion pipeline, but it's more in converting bar graphs/charts/figures -> structured markdown.
we're messing around with some agentic zooming around documents internally, will make our findings public!
If you want to try agentic parsing, we added support for sonnet-3.7 agentic parse and gemini 2.0 in LlamaParse: cloud.llamaindex.ai/parse (select advanced options / parse with agent, then a model).
However, this comes at a high cost in tokens and latency, but results in way better parse quality. Hopefully with newer models this can be improved.
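For reference, basic usage from the llama-parse Python client looks roughly like this; the agent/model options from the web UI are set via additional parameters not shown here (check the current docs rather than guessing at kwarg names):

    # Minimal sketch using the llama-parse client (pip install llama-parse).
    # Basic usage only; "parse with agent" / model selection from the web UI
    # is configured via extra parameters omitted here.
    from llama_parse import LlamaParse

    parser = LlamaParse(
        api_key="llx-...",        # key from cloud.llamaindex.ai
        result_type="markdown",   # or "text"
    )
    documents = parser.load_data("./financial_statement.pdf")
    print(documents[0].text[:500])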
I think a lot of OCR workflows are going the way of multimodal models, but I still find the cloud OCR tools to be vastly superior to most of the other startups in the space, like the ad piece here from Pulse.
Will we start to see a type of "SLA" from AI model providers? If I rent a server, I can pay for more 9s, but can I pay for a guarantee of accuracy from the models?
You could contact an insurance firm about this. Lots of SLAs are simple forms of this, really: you aren't buying reliability, you're getting payouts if it falls below some level.
OCR, VLM or LLM for such important use cases seems like a problem we should not have in 2025.
The real solution would be to have machine-readable data embedded in those PDFs, and have the table be built around that data.
We could then have actual machine-readable financial statements and reports, much like our passports.
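Some e-invoice standards (ZUGFeRD/Factur-X, for example) already work this way by attaching XML inside the PDF. A minimal sketch of reading such an attachment with pypdf, assuming the publisher embedded the data:

    # Hedged sketch: pull embedded machine-readable data out of a PDF,
    # assuming the publisher attached it (as e-invoice standards do with XML).
    from pypdf import PdfReader

    reader = PdfReader("statement.pdf")
    for name, payloads in reader.attachments.items():  # name -> list of bytes
        if name.lower().endswith(".xml"):
            xml_bytes = payloads[0]
            print(f"{name}: {len(xml_bytes)} bytes of structured data")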
The problem is, many of these PDFs come from paper, and OCR is exactly the step where that data gets added in the first place.
While the world has become much more digitized (for example, for any sale I get both a PDF and an XML version of my receipt, which is great), not everything comes from computers; a lot of it is made by humans, for humans.
We have handwritten notes, printed documents, etc., and OCR has to solve this. On the other hand, desktop OCR applications like Prizmo and the latest versions of macOS already have much better output quality compared to these models. There are also specialized free applications to extract tables from PDF files (PDF files are a bunch of fonts and pixels; they carry no information about layout, tables, etc.).
We have these tools, and they work well. There's even the venerable Tesseract, built to OCR scanned papers, which has had a neural network layer for years. Yet we still throw LLMs at everything, cheer like 5-year-olds when they do 20% of what these systems do, and act like this technology hasn't existed for two decades.
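For example, a quick sketch of the classic pipeline with pytesseract; unlike an LLM, Tesseract at least reports per-word confidences, so you know where it's unsure:

    # Minimal sketch: classic OCR with Tesseract via pytesseract
    # (pip install pytesseract, plus the tesseract binary on your PATH).
    from PIL import Image
    import pytesseract

    image = Image.open("scanned_statement.png")
    text = pytesseract.image_to_string(image)  # plain OCR text

    # Per-word confidences let you flag the shaky spots instead of guessing.
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    low_conf = [word for word, conf in zip(data["text"], data["conf"])
                if word.strip() and float(conf) < 60]
    print(text)
    print("low-confidence words:", low_conf)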
The funny thing is that sometimes we need to machine-read documents produced by humans on machines, but the actual source is almost always machine-readable data.
Agree on the hand-written part.
> The funny thing is that sometimes we need to machine-read documents produced by humans on machines, but the actual source is almost always machine-readable data.
Yes, but it's not possible to connect all systems' backends with each other without some big ramifications, so here we are. :)
I still don’t understand why companies don’t release a machine-readable version of their financial statements. They are read by machines anyway! Exporting that data from their software is a simple task.
In the EU, the European Securities and Markets Authority (ESMA) has mandated a machine-readable standard since 2020. In the US, the Financial Data Transparency Act of 2022 (FDTA) made a similar push, and the SEC is working towards it.
What do agents have to do with document parsing? Is it just extracting the text and using an LLM to analyze the extracted data?
How does Pulse compare to Reducto and Gemini? Claude is actually pretty good at PDFs (much better than GPT).
claude is definitely better than gpt -- but both have their flaws! they pretty much fall flat on their face with nested entries, low-fidelity images, etc. (we detailed this heavily in our blog post here [1])
other ocr providers are doing a great job - we personally believe we have the highest accuracy tool on the market. we're not here to dunk on anyone, just to provide unbiased feedback when putting new document extraction tools through a challenge.
[1]: https://www.runpulse.com/blog/why-llms-suck-at-ocr
I can't believe there's market demand for non-deterministic OCR, but what I really suspect is that almost no one scans the same document twice, so they probably don't even realize this is a possibility.
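Testing for it is cheap, though. A tiny sketch, where extract() stands in for whatever hypothetical OCR/extraction call you'd be checking:

    # Hedged sketch: run the same document through an extractor several times
    # and compare output hashes. More than one distinct digest means the
    # pipeline is nondeterministic. extract() is a hypothetical callable.
    import hashlib
    from typing import Callable

    def determinism_check(extract: Callable[[str], str],
                          path: str, runs: int = 10) -> set:
        digests = set()
        for _ in range(runs):
            output = extract(path)  # extracted text for this run
            digests.add(hashlib.sha256(output.encode()).hexdigest())
        return digests  # len(digests) > 1 => nondeterministic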
Honestly he’s famous for pedagogy and research papers, not real world products.
Not surprised it’s underwhelming
What about Coursera? It's a real world product.
> Pedagogy
good read, saw your recent raise in BI - congrats!
appreciate it!
thanks man!
> - Over 50% hallucinated values in complex financial tables
> - Completely fabricated numbers in several instances
Why are these different bullet points? Which one is the correct number of wrong values?
to not make the read extra long, we only included one example. we tried over 50 docs and found a couple with pie charts/bar graphs that weren't parsed at all. there were also a few instances with entire column entries incorrect due to mismatching.
https://x.com/svpino/status/1592140348905517056
""" In 2017, a team led by Andrew Ng published a paper showing off a Deep Learning model to detect pneumonia.
[...]
But there was a big problem with their results:
[...]
A random split would have sent images from the same patient to the train and validation sets.
This creates a leaky validation strategy.
"""
He's not infallible.
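For context, the fix the tweet is alluding to is a group-aware split: hold out whole patients, not individual images. A minimal sketch with scikit-learn on toy data:

    # Hedged sketch: split by patient (group) rather than by image, so no
    # patient's scans leak across the train/validation boundary.
    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    # Toy data: 8 images from 4 patients, 2 images each.
    images = np.arange(8).reshape(-1, 1)
    labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    patient_ids = np.array([1, 1, 2, 2, 3, 3, 4, 4])

    splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    train_idx, val_idx = next(splitter.split(images, labels, groups=patient_ids))

    # No patient appears on both sides of the split.
    assert not set(patient_ids[train_idx]) & set(patient_ids[val_idx])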
>grifter grifts diggity