
Why Instructor might be a better bet than Langchain

Introduction

If you're building LLM applications, a common question is which framework to use: Langchain, Instructor, or something else entirely. I've found that this decision comes down to a few critical factors that determine the right choice for your application. We'll walk through them in three parts:

  1. First, we'll talk about testing and granular control, and why you should be thinking about both from the start.
  2. Then we'll explain why you should evaluate a framework on how quickly it lets you experiment with different models and prompts, and how easily it adopts new features.
  3. Finally, we'll consider why long-term maintenance is also an important factor, and why Instructor often provides a balanced solution, offering both simplicity and flexibility.

How does Instructor work?

For Python developers working with large language models (LLMs), instructor has become a popular tool for structured data extraction. While its capabilities may seem complex, the underlying mechanism is surprisingly straightforward. In this article, we'll walk through a high-level overview of how the library works and how we support the OpenAI client.

We'll start by looking at

  1. Why should you care about structured extraction?
  2. What is the high-level flow?
  3. How does a request go from Pydantic model to validated function call?

By the end of this article, you'll have a good understanding of how instructor helps you get validated outputs from your LLM calls, and a better sense of how you might contribute to the library yourself.
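To make the flow above concrete, here's a minimal sketch of the two Pydantic steps that bracket the LLM call: turning a model into a JSON schema (which is what gets sent to the provider as a tool/function definition), and validating the raw JSON the model returns. This is not instructor's internal code — just the Pydantic v2 primitives it builds on, with a hypothetical `UserInfo` model for illustration:

```python
from pydantic import BaseModel


class UserInfo(BaseModel):
    """A hypothetical response model for extraction."""
    name: str
    age: int


# Step 1: the model is converted into a JSON schema, which can be
# passed to the LLM as a function/tool definition.
schema = UserInfo.model_json_schema()
print(schema["properties"])  # fields the LLM is asked to fill in

# Step 2: the LLM responds with JSON arguments; Pydantic parses and
# validates them, raising a ValidationError if they don't conform.
raw = '{"name": "Ada", "age": 36}'
user = UserInfo.model_validate_json(raw)
print(user.age)  # 36
```

If validation fails, instructor can feed the error back to the model and retry — which is why the "validated" part of the flow matters.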

Introduction

As usual, you can find the code for this specific article here

If you've ever used Google Maps, you've definitely struggled to decide where to eat. The UI, frankly, sucks beyond belief for an application with all the data and compute it has at its disposal.