When Claude Started Cooking (But Needed Grandma’s Cookbook First)
The Recipe That Made Me Realize AI Needs Context
“Give me a recipe using chicken, pasta, and tomatoes.”
I typed that into my freshly connected Claude API integration, hit enter, and watched as it generated what I can only describe as the most technically correct, completely soulless recipe I’d ever seen. Everything was there: ingredients with measurements, step-by-step instructions, even nutritional information. It was perfectly fine, objectively functional, and exactly the kind of thing you’d get from literally any recipe generator on the internet.
That’s when I realized the problem wasn’t with Claude. The problem was that I’d asked an AI to cook without teaching it what good cooking actually looks like.
So I did something that seemed ridiculous at first: I fed it my cookbook collection. Not metaphorically. I literally processed PDF cookbooks, extracted every recipe, converted them to vector embeddings, and built a semantic search system so Claude could reference real recipes before generating new ones. And honestly, watching the AI go from producing generic “Chicken Pasta with Tomato Sauce” to suggesting “Pan-Seared Chicken with Penne Arrabbiata, drawing inspiration from Marcella Hazan’s technique for reducing tomatoes” felt like watching a cooking student finally understand why you wait for the pan to get hot.
Vector Embeddings: Teaching AI About Food Through Math
Here’s the thing about vector embeddings that nobody really explains well: they’re essentially mathematical representations of meaning. When you convert a recipe to a 384-dimensional vector, you’re capturing not just the words, but the relationships between ingredients, techniques, and flavors. Similar recipes end up close together in this mathematical space, even if they use completely different words.
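To make “close together in this mathematical space” concrete, here’s a minimal sketch of cosine similarity, the standard way to score how alike two embedding vectors are. The function is generic; the recipe vectors it would compare come from the embedding model:

```javascript
// Cosine similarity: 1.0 means pointing the same direction (very
// similar meaning), near 0 means unrelated. Dimension-agnostic.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Two tomato-pasta recipes score high here even if they share
// few exact words, because their 384-dim vectors point alike.
```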
Building the cookbook processor meant diving deep into Python, ChromaDB, and the slightly terrifying world of PDF text extraction. I started with five high-quality recipes as test data (Italian, French, Thai, American classics) and built the pipeline: extract text from PDFs, identify recipe sections using NLP patterns, chunk the text intelligently, convert to embeddings, store in a vector database.
The magic happens when someone asks for a recipe. Before Claude generates anything, my system searches the vector database for similar recipes. “Chicken pasta with tomatoes” pulls up recipes for arrabbiata, carbonara variations, one-pot pastas with similar ingredients. Claude gets to see how real chefs approach these flavor combinations, what techniques they use, how they balance ingredients. The AI doesn’t just generate a recipe anymore; it generates a recipe informed by actual cooking knowledge.
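The retrieval pipeline itself lives in Python, but the shape of the search step translates directly. Here’s a hedged sketch using ChromaDB’s JavaScript client; the collection name and result count are illustrative, and it assumes a running Chroma server with an embedding function configured:

```javascript
import { ChromaClient } from "chromadb";

// Sketch: fetch the most similar cookbook recipes before asking
// Claude to generate anything.
async function findSimilarRecipes(query) {
  const client = new ChromaClient();
  const collection = await client.getOrCreateCollection({ name: "recipes" });

  // ChromaDB embeds the query text and returns the nearest recipes.
  const results = await collection.query({
    queryTexts: [query],
    nResults: 5,
  });

  // results.documents[0] holds the matching recipe texts, closest first.
  return results.documents[0];
}

const context = await findSimilarRecipes("chicken pasta with tomatoes");
```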
What surprised me most was how much this changed the quality of results. Instead of generic instructions like “cook until done,” I started getting specific guidance: “cook pasta until al dente, about 2 minutes less than package directions, reserving 1 cup pasta water for sauce.” That’s not Claude being randomly specific; that’s Claude learning from cookbook authors who actually understand cooking.
When Features Start Talking to Each Other
I built the recipe generator thinking it would be a standalone tool. Generate recipes, display them nicely, maybe save favorites. Done. Except once I had recipes flowing, my wife asked a completely reasonable question: “This is cool, but how do I plan meals for the week?”
That question spiraled into building a meal planning system with drag-and-drop calendar slots, which then spiraled into realizing people would need shopping lists, which meant building ingredient extraction and consolidation logic. Suddenly my “simple recipe generator” had become a full kitchen management system.
The interesting part wasn’t building each feature; it was making them actually work together. When you add a recipe to your meal plan, the system extracts every ingredient. When you view your shopping list, it consolidates duplicate ingredients (“2 tbsp olive oil” + “3 tbsp olive oil” = “5 tbsp olive oil”), categorizes everything (produce, dairy, meat, pantry), and lets you check items off as you shop. Export the list and you’ve got a formatted text file ready for your phone.
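Consolidation sounds trivial until you write it. Here’s a minimal sketch of the merging step, assuming ingredients have already been parsed into structured objects and converted to a common unit (the real logic also has to handle unit conversion, which this skips):

```javascript
// Sketch: merge duplicate ingredients across recipes. Assumes each
// item is { name, quantity, unit } with units already normalized.
function consolidateIngredients(ingredients) {
  const merged = new Map();
  for (const { name, quantity, unit } of ingredients) {
    const key = `${name.toLowerCase()}|${unit}`;
    const existing = merged.get(key);
    if (existing) {
      existing.quantity += quantity; // same ingredient, same unit: sum it
    } else {
      merged.set(key, { name, quantity, unit });
    }
  }
  return [...merged.values()];
}

consolidateIngredients([
  { name: "olive oil", quantity: 2, unit: "tbsp" },
  { name: "olive oil", quantity: 3, unit: "tbsp" },
]);
// → [{ name: "olive oil", quantity: 5, unit: "tbsp" }]
```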
What I didn’t expect was how satisfying it felt to build features that enhance each other. The recipe generation informs meal planning, meal planning generates shopping lists, and the cookbook context makes every recipe actually useful. It’s the difference between building isolated tools versus building an ecosystem where each piece makes the others better.
React Context vs. Redux: Choosing Simple Over Clever
Coming from backend development, I expected state management in React to be straightforward. Then I started reading about Redux, MobX, Zustand, Recoil, Jotai, and about seventeen other options that all promised to solve state management in their own special way. The paradox of choice is real.
I went with React Context API plus useReducer, and honestly, I haven’t regretted it once. Here’s why: for an application where multiple features need to share state (recipes, meal plans, shopping lists, user preferences), Context gives you exactly what you need without the ceremony of external state libraries.
The reducer pattern feels natural once you get it. Every state change is an action that flows through one function. Want to add a recipe? Dispatch an action. Toggle a favorite? Dispatch an action. Update the shopping list? You get it. All state transitions are predictable, debuggable, and follow the same pattern.
```javascript
dispatch({
  type: 'ADD_MEAL_TO_PLAN',
  payload: { date: '2024-01-15', meal: 'dinner', recipe: chickenPasta }
});
```
What makes this pattern powerful is how it handles complexity. Adding a recipe to the meal plan might also need to update the shopping list, mark the recipe as used, and recalculate weekly nutrition. With a reducer, all that logic lives in one place, executes atomically, and you can’t accidentally forget a step.
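A simplified sketch of what that looks like inside the reducer; the state and action shapes here are trimmed-down stand-ins for the real ones:

```javascript
// Sketch: one action updates every piece of state that depends on
// it, in one atomic step, so nothing drifts out of sync.
function appReducer(state, action) {
  switch (action.type) {
    case 'ADD_MEAL_TO_PLAN': {
      const { date, meal, recipe } = action.payload;
      return {
        ...state,
        mealPlan: {
          ...state.mealPlan,
          [date]: { ...state.mealPlan[date], [meal]: recipe },
        },
        // Shopping list and usage tracking update in the same step.
        shoppingList: [...state.shoppingList, ...recipe.ingredients],
        usedRecipeIds: [...state.usedRecipeIds, recipe.id],
      };
    }
    default:
      return state;
  }
}
```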
The learning curve exists, but it’s not steep. useReducer is just useState with more structure. Once you understand that state is your data and dispatch is how you change it, everything else falls into place.
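Wiring it up with Context takes only a few lines. A sketch, with illustrative names, assuming the appReducer from the previous snippet:

```javascript
import { createContext, useContext, useReducer } from "react";

// Sketch: one provider exposes shared state and dispatch to every feature.
const AppContext = createContext(null);

export function AppProvider({ children }) {
  const [state, dispatch] = useReducer(appReducer, {
    recipes: [], mealPlan: {}, shoppingList: [], usedRecipeIds: [],
  });
  return (
    <AppContext.Provider value={{ state, dispatch }}>
      {children}
    </AppContext.Provider>
  );
}

// Any component can read state or dispatch actions with one hook call.
export const useApp = () => useContext(AppContext);
```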
The Features I Didn’t Plan to Build
Every developer knows this story: you build Feature A, which makes Feature B seem obvious, which naturally suggests Feature C, and suddenly you’re three features deep and wondering how you got here. My meal planning system started simple: just drag recipes onto calendar slots. But then:
- People want to see total calories for the week → nutrition tracking
- People forget what they planned last week → meal plan history
- People want to meal prep Sunday → batch shopping list generation
- People have favorite meal patterns → meal plan templates (future feature)
The temptation is to build everything at once. I learned to resist that. Each feature got built when it became genuinely necessary, not when it seemed cool. The shopping list feature sat on my roadmap for weeks before I actually built it, because I wanted to be sure it solved a real problem rather than just adding complexity.
But here’s what’s interesting: even the features I didn’t fully build taught me something. I started work on social sharing before realizing nobody wants to share their meal plan with strangers; they want to share specific recipes with friends. That realization saved me from building an entire social network nobody would use.
Tailwind CSS: When You Stop Writing CSS
I’ve written CSS for over a decade. I know specificity wars, I understand the cascade, I’ve battled float layouts and won. So using Tailwind felt like cheating at first: just add utility classes until things look right? That’s not real CSS!
Except it absolutely is, and it’s better. When you write className="grid md:grid-cols-2 lg:grid-cols-3 gap-4", you’re being explicit about responsive behavior in a way that’s immediately understandable. No media queries hidden in separate files, no wondering which breakpoint applies, just straightforward “mobile gets one column, medium gets two, large gets three.”
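In JSX, that grid reads like a sentence. A sketch (the exact classes in my components differ, but the pattern is the same):

```javascript
// Hypothetical recipe grid: one column on phones, two at the md
// breakpoint, three at lg. The responsive behavior is the markup.
function RecipeGrid({ recipes }) {
  return (
    <div className="grid md:grid-cols-2 lg:grid-cols-3 gap-4">
      {recipes.map((recipe) => (
        <article key={recipe.id} className="bg-white rounded shadow p-4">
          <h3 className="text-lg font-semibold">{recipe.title}</h3>
        </article>
      ))}
    </div>
  );
}
```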
The killer feature isn’t the utilities themselves; it’s the constraint system. When everything uses the same spacing scale, the same color palette, the same border radiuses, your entire app develops visual consistency automatically. I didn’t make design decisions about spacing; I used Tailwind’s spacing scale and everything naturally harmonized.
Does it make your JSX longer? Sure. Is that a problem? Not really. I’d rather see className="bg-blue-500 hover:bg-blue-600 px-4 py-2 rounded" and immediately understand the component’s appearance than hunt through separate CSS files trying to figure out what .button-primary actually does.
The API That Almost Works Too Well
Integrating Claude’s API felt remarkably straightforward, almost suspiciously so. Make a POST request, include your prompt and parameters, parse the JSON response. Standard REST API stuff. The tricky parts came from unexpected places.
First challenge: getting Claude to consistently return valid JSON. LLMs are conversational by nature; they want to explain and elaborate. Getting Claude to return only a JSON object without surrounding text took careful prompt engineering: “Return ONLY a valid JSON object. Do not include any text before or after the JSON. Do not use markdown code blocks.”
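Put together, the call itself is a plain fetch. A hedged sketch: the model name and token limit are illustrative, and the /api prefix assumes the dev proxy described a little further down:

```javascript
// Sketch of the generation call. The system prompt forces JSON-only
// output; JSON.parse will throw loudly if Claude drifts anyway.
async function generateRecipe(prompt) {
  const response = await fetch("/api/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": import.meta.env.VITE_ANTHROPIC_API_KEY, // dev-only setup
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022", // illustrative model name
      max_tokens: 1024,
      system:
        "Return ONLY a valid JSON object. Do not include any text " +
        "before or after the JSON. Do not use markdown code blocks.",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const data = await response.json();
  // The generated text lives in the first content block.
  return JSON.parse(data.content[0].text);
}
```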
Second challenge: handling the cookbook context without overwhelming the prompt. I built a preprocessing step that fetches relevant recipes from the vector database, extracts key techniques and flavor profiles, and formats them into a concise context block. Too much context and you hit token limits; too little and the cookbook knowledge doesn’t influence the output.
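The preprocessing step is essentially a budgeted string builder. A rough sketch, using character count as a cheap stand-in for real token counting (the recipe fields here are hypothetical):

```javascript
// Sketch: format retrieved recipes into a context block, stopping
// before the block gets big enough to crowd out the actual request.
function buildCookbookContext(recipes, maxChars = 4000) {
  const parts = [];
  let used = 0;
  for (const recipe of recipes) {
    const entry = `## ${recipe.title}\nTechniques: ${recipe.techniques}\n${recipe.summary}\n`;
    if (used + entry.length > maxChars) break; // respect the budget
    parts.push(entry);
    used += entry.length;
  }
  return `Reference recipes from real cookbooks:\n\n${parts.join("\n")}`;
}
```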
Third challenge: CORS in development. Browsers block direct API calls to external services, so I set up a Vite proxy during development. In production, you’d need a proper backend proxy to keep your API key secure. For now, the API key lives in environment variables and users understand it’s a development setup.
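For reference, the dev-time workaround is a few lines of Vite config (the /api prefix is my choice of local path, not anything Anthropic requires):

```javascript
// vite.config.js: dev-only proxy so browser requests to /api/* get
// forwarded server-side to api.anthropic.com, sidestepping CORS.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        target: "https://api.anthropic.com",
        changeOrigin: true,
        // Strip the local prefix: /api/v1/messages becomes /v1/messages
        rewrite: (path) => path.replace(/^\/api/, ""),
      },
    },
  },
});
```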
What impressed me most was Claude’s ability to understand cooking context. Give it cookbook examples of proper pasta technique and it starts suggesting pasta water for sauce consistency. Show it recipes that build layers of flavor and it stops suggesting “add all ingredients at once.” The AI genuinely learns from the examples, not just templates from them.
Building for Real Cooking, Not Instagram Cooking
The biggest shift in my thinking came from actually using the app while cooking. I’d built this beautiful interface with perfectly formatted recipe cards, and then I tried following a recipe while my hands were covered in raw chicken. Suddenly the tiny text and small touch targets seemed absurd.
Real cooking means:
- Your phone is propped on the counter getting splattered
- You’re quickly glancing between recipe and pan
- You need to check off ingredients as you add them
- You’ll scroll with one knuckle because your fingers are messy
So I rebuilt the recipe display with larger text, bigger checkboxes, higher contrast, more whitespace. The “compact view” option exists because sometimes you need to see four recipes at once while meal planning, but the default view assumes you’re actually cooking.
The shopping list got the same treatment. In the grocery store, you’re pushing a cart, probably juggling kids or bags, moving quickly through aisles. Big checkboxes, clear categories, simple interactions. The export feature exists because honestly, sometimes you just want the list in your phone’s notes app where you can tick things off without unlocking the screen.
These aren’t features that make for impressive demos, but they’re the difference between software someone uses once to see how it works and software someone actually integrates into their daily routine.
What Building This Actually Taught Me
The technical skills were expected: React patterns, API integration, state management, vector embeddings. But the unexpected lessons came from building something people would actually use in their kitchens.
Feature bloat is tempting. Every cooking-related feature seems like it should be in a cooking app. But recipe scaling, ingredient substitution, cooking timers, pantry inventory: each adds complexity and maintenance burden. I learned to ask “does this make cooking easier or just make the app bigger?”
User interface matters more than I expected. Beautiful design is nice, but usable design while cooking is essential. If someone can’t easily read your recipe while stirring a pan, your font sizes are wrong regardless of how nice they look in Figma.
Integration beats isolation. A recipe generator alone is fine. A meal planner alone is fine. A shopping list alone is fine. But connecting them creates something more valuable than the sum of the parts. The same recipe flows through planning, shopping, and cooking without any manual copying or reformatting.
AI needs context to be useful. Claude is impressively capable, but generic prompts get generic responses. Feed it cookbook knowledge, specific techniques, flavor profiles, and it generates recipes that feel like they came from a real cookbook rather than an algorithm.
The Features Still on the List
Every project has that backlog of “wouldn’t it be cool if…” features. Mine includes:
- Batch cooking intelligence: Identify recipes with overlapping prep (multiple dishes using diced onions) and suggest efficient prep order
- Ingredient substitution: Out of buttermilk? The app should know milk + lemon juice works
- Leftover management: Had roast chicken Monday? Here are five recipes for leftover chicken
- Seasonal awareness: Suggest recipes based on what’s actually in season
- Nutrition goal tracking: Not just displaying nutrition info but helping people meet dietary goals
- Recipe scaling: Automatically adjust ingredients for different serving sizes
- Cook-along mode: Step-by-step interface with timers and ingredient checking
Some of these I’ll build. Some I’ll realize aren’t actually necessary. Some will get replaced by better ideas. That’s how personal projects work: they evolve based on actual use rather than theoretical planning.
Why This Matters More Than I Expected
I built this because I wanted to solve a personal problem: coming up with weeknight dinner ideas without resorting to the same five recipes on rotation. What I ended up building is something that’s changed how my family approaches cooking and meal planning.
We actually use the meal planner. We actually generate shopping lists and check items off in the store. We actually try new recipes the AI suggests based on what’s in the fridge. The software has become genuinely useful, not just technically interesting.
There’s something deeply satisfying about building tools you’ll use daily. Not demo software, not proof-of-concept projects, but actual utilities that solve real problems in your life. Every time I generate a shopping list for the week or drag a recipe into Thursday’s dinner slot, I’m using something I built specifically to make that task easier.
And honestly, that’s probably the best outcome for any side project: when it stops being a project and just becomes a tool you rely on. The code is still evolving, features are still being added, but somewhere along the way it crossed the threshold from “thing I’m building” to “thing I use.”
Which is exactly what good software should do: fade into the background while making life just a little bit easier.