DSPy in the News!


In the fast-evolving landscape of Machine Learning, the potential of language models (LMs) is rapidly expanding. DSPy aims to revolutionize the way we approach and optimize LM pipelines. I am thrilled to share that our work has garnered significant attention and discussion on various platforms.

DSPy Brief Overview:

Our paper was accepted to the NeurIPS 2023 Workshop R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Large Foundation Models, and it is also currently under review.

The abstract of the DSPy paper reads:

The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded “prompt templates”, i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting (generally by over 25% and 65%, respectively) and pipelines with expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5.
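To make the programming model the abstract describes more concrete, here is a minimal sketch of the workflow: declare a module through a signature instead of a hand-written prompt template, then compile it against a metric so it can bootstrap its own demonstrations. The model choice, training example, and metric function below are illustrative assumptions, and exact class or parameter names may differ across DSPy versions.

```python
# A minimal, illustrative sketch of the DSPy workflow (not the paper's exact code).
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure the underlying language model (model name is an assumption).
lm = dspy.OpenAI(model="gpt-3.5-turbo")
dspy.settings.configure(lm=lm)

# A declarative module: the signature "question -> answer" replaces a
# hand-crafted prompt template; ChainOfThought adds a reasoning step.
class SimpleQA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.generate_answer = dspy.ChainOfThought("question -> answer")

    def forward(self, question):
        return self.generate_answer(question=question)

# A tiny training set of labeled examples (contents are placeholders).
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
]

# A simple metric for the compiler to maximize (hypothetical choice).
def exact_match(example, pred, trace=None):
    return example.answer.lower() == pred.answer.lower()

# The compiler optimizes the pipeline by bootstrapping few-shot
# demonstrations that score well under the metric.
teleprompter = BootstrapFewShot(metric=exact_match)
compiled_qa = teleprompter.compile(SimpleQA(), trainset=trainset)

prediction = compiled_qa(question="What is the capital of France?")
print(prediction.answer)
```

The point of the sketch is the separation of concerns the abstract emphasizes: the module declares *what* transformation is needed, while the compiler decides *how* to prompt or finetune the underlying LM to maximize the metric.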

Online Buzz:

  • Hacker News (Y Combinator): A discussion of DSPy's potential in the broader context of ML innovations.

  • Synthical: Our research was highlighted for its groundbreaking approach to LM pipelines.

  • Deep Learning Monitor: An in-depth feature that delves into the technicalities and outcomes of our work.

  • ScienceCast: Our research was showcased for its potential to revolutionize ML methodologies.

  • PaperReading Club Page: A spotlight on our work and its implications for future ML research.

  • HuggingFace Paper Page: Our paper was featured among other prominent research works in the field.

The recognition and discussions around our research have been immensely motivating. I am grateful for the support and feedback from peers and experts alike.