I gave a short talk at NUS in January 2025 about structured outputs and how they enable faster iteration and testing when building with language models. I've written up a more detailed version of the talk here, as well as provided the slides below.
LLM applications in 2025 face a unique challenge: while they enable rapid deployment compared to traditional ML systems, they also introduce new risks around reliability and safety.
In this article, I'll explain why structured outputs remain crucial for building robust LLM applications, and how they enable faster iteration and testing.
If you're interested in the code for this article, you can find it here, where I've implemented a simplified version of CLIO without the PII classifier, adapting most of the original prompts to some degree.
Analysing chat data at scale is a challenging task for three main reasons
Privacy - Users don't want their data to be shared with others, and we need to respect that. This makes it challenging to analyse user data that's specific to any individual.
Explainability - Unsupervised clustering methods are sometimes difficult to interpret because we don't have a good way to understand what the clusters mean.
Scale - We need to be able to process large amounts of data efficiently.
An ideal solution allows us to understand broad trends and patterns in user behaviour while preserving user privacy. In this article, we'll explore an approach that addresses this challenge - CLIO (Claude Insights and Observations), which was recently discussed in a research paper released by Anthropic.
We'll do so in three steps
We'll start by understanding how CLIO works at a high level
We'll then implement a simplified version of CLIO in Python
Finally, we'll discuss some of the clusters we generated and some limitations of this approach
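Before diving in, it helps to have a rough mental model. In the paper, CLIO works by summarising each conversation, embedding those summaries, clustering the embeddings, and then asking a model to label each cluster. The clustering step might look like this minimal sketch using scikit-learn (the function and parameter names are my own, not from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_summaries(embeddings: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Group conversation-summary embeddings into topic clusters.

    Each row of `embeddings` is the embedding of one conversation summary;
    returns one cluster label per summary.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=42)
    return km.fit_predict(embeddings)
```

The privacy properties come from operating on summaries rather than raw chats: by the time anything is clustered or labelled, identifying details have already been stripped away.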
I've been a heavy user of Claude for the past few months and anecdotally, ever since Sonnet 3.6, I've been using it more and more.
I was curious to see how I use it on a day-to-day basis, so when I realised I could export my Claude chat history, I thought I'd try doing some analysis on it.
After a year of building AI applications and contributing to projects like Instructor, I've found that getting started with language models is simpler than most people think. You don't need a deep learning background or months of preparation - just a practical approach to learning and building.
Here are three effective ways to get started (and you can pursue all of them at once):
Daily Usage: Put Claude, ChatGPT, or other LLMs to work in your daily tasks. Use them for debugging, code reviews, planning - anything. This gives you immediate value while building intuition for what these models can and can't do well.
Focusing on Implementation: Start with Instructor and basic APIs. Build something simple that solves a real problem, even if it's just a classifier or text analyzer. The goal is getting hands-on experience with development patterns that actually work in production.
Understand the Tech: Write basic evaluations for your specific use cases. Generate synthetic data to test edge cases. Read papers that explain the behaviors you're seeing in practice. This deeper understanding makes you better at both using and building with these tools.
You should, and will, be able to do all of these at once. Remember that the goal isn't expertise but discovering which aspect of the space you're most interested in.
There's a tremendous number of possible directions to work on - dataset curation, model architecture, hardware optimisation, and other exciting directions such as post-Transformer architectures and multimodal models - all happening at the same time.
2024 has been a year of remarkable transformation. Just two and a half years out of college, I went from feeling uncertain about my path in software engineering to finding my stride in machine learning engineering. It's been a journey of pushing boundaries – improving my health, contributing to open source, and diving deeper into research.
The year has felt like a constant acceleration, especially in the last six months, where everything from technical growth to personal development seemed to shift into high gear.
Four achievements stand out from this transformative year:
Helped grow Instructor from ~300k downloads to 1.1M downloads this year as a core contributor
Quit my job as a software engineer and started working full time with LLMs
Got into better shape - lost about 6kg total, and my total cholesterol dropped by 32% with lifestyle changes
Delivered four technical talks this year - my first ever
I had a lot of fun playing around with text to image models over the weekend and thought I'd write a short blog post about some of the things I learnt. I ran all of this on Modal and spent ~10 USD across the entire weekend which is honestly well below the Modal $20 free tier credit.
This was mainly for a small project I've been working on called CYOA, where users get to create their own stories and have a language model automatically generate images and choices for each one.
Although many tasks require subjective evaluations, I've found that starting with simple binary metrics can get you surprisingly far. In this article, I'll share a recent case study of extracting questions from transcripts. We'll walk through a practical process for converting subjective evaluations to measurable metrics:
Using synthetic data for rapid iteration - instead of waiting minutes per test, we'll see how to iterate in seconds
Converting subjective judgments to binary choices - transforming "is this a good question?" into "did we find the right transcript chunks?"
Iteratively improving prompts with fast feedback - using clear metrics to systematically enhance performance
By the end of this article, you'll have concrete techniques for making subjective tasks measurable and iterating quickly on LLM applications.
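The conversion in the second step can be surprisingly mechanical. A minimal sketch, assuming each test case pairs the transcript chunk IDs a pipeline retrieved with the chunk IDs a human labelled as correct (the names here are my own, not from the case study):

```python
def chunks_match(retrieved_ids: set[str], expected_ids: set[str]) -> bool:
    """Binary metric: did we find exactly the right transcript chunks?"""
    return retrieved_ids == expected_ids

def pass_rate(results: list[tuple[set[str], set[str]]]) -> float:
    """Fraction of test cases where the right chunks were found."""
    return sum(chunks_match(r, e) for r, e in results) / len(results)
```

The point of a metric this blunt is speed: a pass rate is trivial to compute on every prompt change, so you can run dozens of iterations in the time a single subjective review would take.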
I'm writing this as I take a train from Kaohsiung to Taipei, contemplating a question that frequently surfaces in AI discussions: Could anyone clone an LLM application if they had access to all its prompts?
In this article, I'll challenge this perspective by examining how the true value of LLM applications extends far beyond just a simple set of prompts.
We'll explore three critical areas that create sustainable competitive advantages
People always talk about looking at your data but what does it actually mean in practice?
In this post, I'll walk you through a short example. After examining failure patterns, we discovered our query understanding step was aggressively filtering out relevant items. By prompting the model to be more flexible with its filters, we improved the recall of the filtering system I was working on from 0.86 to 1.0.
There are really two things that make debugging these issues much easier
A clear objective metric to optimise for - in this case, I was looking at recall (whether or not the relevant item was present in the top-k results)
An easy way to look at the data - I like using Braintrust, but you can use whatever you want.
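A metric like the recall described above takes only a few lines to compute. A minimal sketch, assuming each test case is a labelled relevant item plus the system's ranked results (names are illustrative):

```python
def recall_at_k(relevant_id: str, ranked_ids: list[str], k: int) -> float:
    """1.0 if the relevant item appears in the top-k results, else 0.0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mean_recall(cases: list[tuple[str, list[str]]], k: int = 5) -> float:
    """Average recall@k over a list of (relevant_id, ranked_ids) cases."""
    return sum(recall_at_k(r, ids, k) for r, ids in cases) / len(cases)
```

Because it's per-case binary, the failures are easy to pull out and read one by one, which is exactly how we spotted the over-aggressive filtering.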
Ultimately debugging these systems is all about asking intelligent questions and systematically hunting for failure modes. By the end of the post, you'll have a better idea of how to think about data debugging as an iterative process.
This is an article that sums up a talk I'm giving in Kaohsiung at the Taiwan Hackerhouse Meetup on Dec 9th. If you're interested in attending, you can sign up here
When building LLM applications, teams often jump straight to complex evaluations - using tools like RAGAS or another LLM as a judge. While these sophisticated approaches have their place, I've found that starting with simple, measurable metrics leads to more reliable systems that improve steadily over time.
I think there are five levels that teams progress through as they build more reliable language model applications:
Structured Outputs - Move from raw text to validated data structures
Prioritizing Iteration - Using cheap metrics like recall/MRR to ensure you're nailing down the basics
Fuzzing - Using synthetic data to systematically test for edge cases
Segmentation - Understanding the weak points of your model
LLM Judges - Using LLM as a judge to evaluate subjective aspects
Let's explore each level in more detail and see how they fit into a progression. We'll use instructor in these examples since that's what I'm most familiar with, but the concepts can be applied to other tools as well.
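To make the first level concrete, here is a minimal sketch of the idea using plain Pydantic - parsing raw model output into a validated data structure so malformed responses fail loudly rather than flowing downstream (instructor builds on exactly this validation step, retrying the call when it fails; the `Ticket` model is a made-up example):

```python
from pydantic import BaseModel, field_validator

class Ticket(BaseModel):
    """A validated data structure instead of raw LLM text."""
    title: str
    priority: int

    @field_validator("priority")
    @classmethod
    def priority_in_range(cls, v: int) -> int:
        if not 1 <= v <= 5:
            raise ValueError("priority must be between 1 and 5")
        return v

# Parsing and validation happen in one step; bad output raises
# a ValidationError instead of silently propagating.
ticket = Ticket.model_validate_json('{"title": "Login fails", "priority": 2}')
```

Once every response is a typed object rather than a string, the later levels become possible: you can compute metrics over fields, fuzz inputs against the schema, and segment failures by attribute.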