5 Comments
Yexi:

When GPT-3 came out, I wrote a similar article discussing the practical limitations of GPT-3 at that time. Those practical limitations could be resolved over time.

That article also discussed the theoretical limitations of regression-based solutions (any statistical machine learning solution that uses a loss function to quantify its effectiveness), as well as reasoning/logical limitations (which not even human beings can overcome).

I'm glad to see that most of the practical limitations have been overcome within a year. Unfortunately, people are still hyped about the current breakthroughs in AI and think it can solve every problem.

That article can be found at https://yexijiang.substack.com/p/back-to-untitled-1a93f4026086.

Forest:

To be fair, it is very hard to prove insightful theoretical limitations for deep learning models. On one hand, under unrealistic assumptions (infinite precision), people have proved that Transformers are Turing-complete. On the other hand, empirically, Transformers show obvious limitations in reasoning. The added complexity is that the way Transformers are used changes over time, and CoT definitely increases LLMs' reasoning capabilities.

The recent papers that I found and referenced in this piece paint a picture much closer to reality than earlier work, and one of them even gives a learnability-based result, which is far more realistic.

Yexi:

There are theoretical limitations of Turing machines, and of logic in general. Deep learning cannot be a superset of the Turing machine, so it is also constrained by whatever limitations Turing machines have. More generally, there are limits on mathematics and logic themselves, as proved by Gödel's incompleteness theorems. And surely, the ceiling is very high and we are very far from it.

Regarding CoT, I implemented a Chain-of-Tasks, which extends one-inference prompting to multi-round reasoning. See the demo at https://www.linkedin.com/posts/yxjiang_ai-futureofproblemsolving-agents-activity-7179671918906658816-NTC1?utm_source=share&utm_medium=member_desktop.
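For readers unfamiliar with the idea, a minimal sketch of such a multi-round loop might look like the following. This is my own illustration, not the demo's actual implementation: `call_model` is a toy stand-in for a real LLM API call, and the task decomposition is assumed to be given rather than generated.

```python
def call_model(prompt: str) -> str:
    """Toy stand-in for an LLM call; a real implementation
    would send the prompt to a model API here."""
    last_line = prompt.splitlines()[-1]
    return f"result({last_line})"

def chain_of_tasks(tasks, question):
    """Run each sub-task as its own inference round, feeding every
    previous round's answer back into the next prompt's context."""
    context = [f"Question: {question}"]
    for task in tasks:
        prompt = "\n".join(context + [f"Task: {task}"])
        answer = call_model(prompt)
        # Accumulate the (task, answer) pair so later rounds can build on it.
        context.append(f"Task: {task}\nAnswer: {answer}")
    return context[-1]
```

The point of the structure is that each round sees the full history of earlier sub-task results, in contrast to a single one-shot prompt.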

I have observed that nowadays people are zealous about the recent progress of AI. The progress is great, but we still need to be sober about what it can and cannot do.

Forest:

Just to close the loop, here are my thoughts on the implications of the existence of undecidable problems and Gödel's incompleteness theorem: https://open.substack.com/pub/theunscalable/p/the-curse-of-generalization?utm_source=share&utm_medium=android&r=1g1flx To put it simply, I consider them to show the intrinsic difficulty of general problem solving, and why there will never be a once-and-for-all solution.

Forest:

You reminded me that the history of computation and the implications of the limitation theorems might be a good topic to write about. Stay tuned :)
