How AI Agentic Workflows Could Drive More AI Progress Than Even the Next Generation of Foundation Models
Dr. Andrew Ng discusses how AI agentic workflows could revolutionize AI development, surpassing the impact of next-generation foundation models.
The Limitations of Zero-Shot Prompting
In a fascinating presentation at DevDay during the Snowflake Data Cloud Summit, Dr. Andrew Ng, founder of DeepLearning.AI and Landing AI, shared his insights on the potential of AI agentic workflows to revolutionize the field of artificial intelligence. Dr. Ng argued that these iterative, multistep approaches could lead to even greater advancements than the development of more powerful foundation models.
Traditional language models, such as GPT-3.5 and GPT-4, have demonstrated remarkable capabilities with zero-shot prompting, where the model generates an output from a single prompt without any revision. This approach is akin to asking a person to write an essay from start to finish without allowing them to backspace or make any edits. The results can be impressive, but the single-pass nature of the process has clear limitations.
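For concreteness, a zero-shot call is a single request with no opportunity to revise. Below is a minimal sketch using the OpenAI Python SDK (the v1 chat-completions client); the model name and prompt are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot prompting: one prompt in, one answer out, no chance to revise.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": "Write a short essay on climate change."}],
)
print(response.choices[0].message.content)
```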
The Promise of Agentic Workflows
In contrast, agentic workflows enable AI models to tackle problems in a more iterative and human-like manner. These workflows allow the model to break down a task into smaller steps, gather information, generate drafts, and then revise and improve upon its work. This approach has shown significant promise in both coding and computer vision applications.
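As a rough illustration of that pattern (not Dr. Ng's implementation), the sketch below wraps the same kind of model call in a plan-draft-critique-revise loop; the prompts and the two-round revision budget are assumptions made for brevity.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One chat-completion call; the model name is illustrative."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "Write a short essay on climate change."

# Step 1: plan the work, then produce a first draft.
outline = ask(f"Outline the key points for this task:\n{task}")
draft = ask(f"Using this outline, write a first draft:\n{outline}")

# Step 2: iterate -- critique the draft, then revise it.
for _ in range(2):
    critique = ask(f"List concrete weaknesses in this draft:\n{draft}")
    draft = ask(
        f"Revise the draft to address this feedback.\n"
        f"Draft:\n{draft}\nFeedback:\n{critique}"
    )

print(draft)
```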
Dr. Ng presented data comparing the performance of GPT-3.5 and GPT-4 on the HumanEval coding benchmark. While GPT-4 outperformed GPT-3.5 in zero-shot prompting, the real breakthrough came when GPT-3.5 was wrapped in an agentic workflow. This combination achieved results comparable to GPT-4, suggesting that the iterative process could be as important as the underlying model's capabilities.
Landing AI's Vision Agent
Landing AI has recently open-sourced its Vision Agent, which showcases the potential of agentic workflows in computer vision tasks. By providing a prompt, such as "Calculate the distance to the shark in this surfing video," the Vision Agent can generate a series of instructions, retrieve the necessary tools (functions), and produce code to analyze the video and output the desired results.
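As a rough sketch of the first half of that flow (planning and tool retrieval), and not Landing AI's actual implementation, the snippet below shows how a prompt might be broken into steps and paired with tool descriptions before code generation; the tool registry, prompts, and model name are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative tool registry: name -> description the planner can draw on.
TOOLS = {
    "frame_sampler": "Extracts frames from a video at a given rate.",
    "object_detector": "Detects and localizes objects in a frame.",
    "depth_estimator": "Estimates per-pixel depth for a single frame.",
}

prompt = "Calculate the distance to the shark in this surfing video"

# Step 1: plan -- break the task into numbered instructions.
plan = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": f"Break this vision task into numbered steps: {prompt}"}],
).choices[0].message.content

# Step 2: retrieve the descriptions of the tools the plan needs, then hand
# plan + tool context to the code-generation step (sketched after the next paragraph).
tool_context = "\n".join(f"{name}: {desc}" for name, desc in TOOLS.items())
print(plan)
print(tool_context)
```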
The Vision Agent consists of two components: a Code Agent and a Test Agent. The Code Agent first runs a planner to break down the task, retrieves detailed descriptions of the required tools, and generates the code. The Test Agent then writes tests for the generated code, executes them, and provides feedback to the Code Agent for further refinement.
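A minimal sketch of that generate-test-refine loop, treating both agents as prompted model calls via the OpenAI Python SDK; the prompts, the three-attempt budget, and the use of subprocess to run the tests are assumptions, not Landing AI's implementation.

```python
import subprocess
import tempfile
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "Write a Python function distance(p1, p2) returning Euclidean distance."

# Code Agent: plan the task and generate an initial implementation
# (assumes the model returns plain code, not markdown).
code = ask(f"Plan, then write Python code for this task. Return only code:\n{task}")

for _ in range(3):
    # Test Agent: write tests for the generated code.
    tests = ask(f"Write plain Python assert statements that test this code. "
                f"Return only code:\n{code}")

    # Execute code + tests together; capture any failure as feedback.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + tests)
        path = f.name
    run = subprocess.run(["python", path], capture_output=True, text=True)
    if run.returncode == 0:
        break  # tests passed

    # Feed the failure back to the Code Agent for refinement.
    code = ask(f"The tests failed with:\n{run.stderr}\n"
               f"Fix the code. Return only code:\n{code}")

print(code)
```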
Examples and Limitations
Dr. Ng demonstrated the Vision Agent's capabilities through several examples, including analyzing a video of car crashes, highlighting interesting parts of CCTV footage, and detecting masked and unmasked people in an image. While the Vision Agent is not perfect and can sometimes miss objects or require prompt refinement, it showcases the potential of agentic workflows to streamline and simplify complex computer vision tasks.
The implications of agentic workflows extend beyond coding and computer vision. By enabling AI models to plan, research, generate, and revise their outputs, these workflows could lead to significant advancements in various domains, such as natural language processing, data analysis, and creative applications.
The Future of AI Development
As AI continues to evolve, it is essential to explore new approaches that can unlock the full potential of these technologies. While foundational models like GPT-4 have pushed the boundaries of what is possible, agentic workflows could be the key to driving even greater progress in the field.
Dr. Ng's presentation serves as a call to action for developers and researchers to embrace agentic workflows and contribute to their development. By collaborating and building upon open-source projects like Landing AI's Vision Agent, the AI community can accelerate the adoption and refinement of these powerful techniques.
In conclusion, Dr. Andrew Ng's presentation at DevDay highlighted the immense potential of AI agentic workflows to drive AI progress, potentially even surpassing the impact of next-generation foundation models. By enabling AI models to tackle problems in a more iterative and human-like manner, these workflows could lead to breakthroughs in coding, computer vision, and beyond. As the AI community continues to explore and refine these approaches, we may be on the cusp of a new era in artificial intelligence, one that promises to transform industries and reshape our understanding of what is possible with AI.