Winning TreeHacks: Re-defining education in 2026
How I built Minerva, and how it won twice at the best collegiate hackathon in the world
This is v1 of the blog post. I'll be updating this post with more details later.
Last weekend, I travelled to Stanford (my third time at SFO in the past year) to attend TreeHacks 12. Going into it, I had a couple of goals:
- Win a prize
- Build something that I would be proud of
- Get lots of free swag
- Maybe get an internship offer for the summer
- Have fun
Day 1: The Beginning
Arriving at TreeHacks
I arrived at SFO on Friday morning, and met up with my friend HorseNuggets. He's a developer at Roblox, and was kind enough to offer a tour of the Roblox campus. After that, I took the Caltrain down to San Jose to kill a couple hours at the Microcenter, and pick up some white monsters.
I also forgot my luggage at the airport security, which made for a funny LinkedIn post.
Flight
I got to Stanford at around 4:00 PM, where the line to get in was insanely long. After checking in, I received my badge and swag bag, and then met up with my team: Ular Kimsanov, Anton Angeletti, and Manav Joshi. We grabbed some food and headed to the opening ceremony, where Garry Tan, CEO of Y Combinator, gave a passionate speech, calling for people to build things that matter, and urging everyone to "boil the ocean." Sam Altman, CEO of OpenAI, then joined the TreeHacks team on stage for a candid interview.
My TreeHacks 2026 Badge
Hacking Starts
Anton had already found and claimed one of the only private study rooms available in the building, so we all piled in and started setting up. Having this room to ourselves really helped us focus on the task at hand, and even get some sleep at night on an air mattress Anton had also brought.
A week before the hackathon, we had already compiled a list of viable ideas, and we decided to go with our initial one: an AI tutoring platform that can help students learn any topic. We then brainstormed a bit more and landed on this: an AI tutor that you can video call (with HeyGen's LiveAvatar) and have a natural conversation with. The tutor would be able to help the student with any topic, and have the tools to represent concepts visually.
After that, we spent the next few hours researching and planning out the project. Then Manav, Anton, and I decided to try to get some sleep. Ular, on the other hand, wasn't tired and decided to stay up all night and get the project bootstrapped. This included setting up the Next.js project and getting the Zoom Video SDK to display the HeyGen LiveAvatar in the browser.
Day 2: The Fun Part
Saturday morning, I woke up with only three hours of sleep (I wasn't really able to sleep) and took over the project from Ular. While I was doing that, Manav focused on going around to the different sponsors and pitching our idea to them. This is a crucial part of the hackathon: it gets your team on the sponsors' radar, and they may help steer you in the right direction.
Remember when I said I wanted to have fun? In previous hackathons, I would spend the entire event coding and not really enjoying the experience. This time, I wanted to participate in as many activities as possible and really enjoy myself. So, in between coding sessions (while I had a coding agent running), I would go around to different sponsor booths, chat with the sponsors, and get free swag.
They even brought live llamas outside the venue... because why not? It was a great little break from staring at a screen.
At one point, the Poke booth had a vending machine full of Apple products. The catch? You had to convince an AI (Poke in bouncer mode) that you deserved it. I walked away with free AirPods, but in hindsight, if I'd just said "I want the iPhone 17 Pro" instead of "I'll take either the AirPods or the iPhone," I probably could have walked away with a much bigger win. Manav proved it too, by walking straight up and asking for the phone, and getting it. Lesson learned. 🙃
We ended the day with a lightsaber fight outside the venue for a LEGO Star Wars set, which (of course) Manav won.
Of course, all of this was sprinkled in between long coding sessions. We had the MVP completed by midnight, then spent the rest of the night polishing the project and working on getting the latency down.
The AirPods I won
Day 3: Winning
By morning, we were all exhausted, but we had a polished demo ready to go.
But then... disaster struck.
Everything was working perfectly 30 minutes before hacking ended. But then, as we were trying to record our demo video, our logic for displaying Manim (math animation) videos broke.
We scrambled to fix it, and just five minutes before the Devpost submission was due, we got it working and recorded our demo video.
Judging
Once judging started, we made our way to our table and set up. I had brought my DJI Mic Mini with me (and was using it to vlog), so we used it as an input microphone for our demo. However, one thing we didn't account for was the background noise: the mic was picking up our voices, but the output was practically silent, and we didn't have a speaker to boost the volume.
We were stationed next to a locked door, so Manav went around the building, found another entrance that was open, and let us in. Ular and I set up just outside and funneled judges into the empty area beyond the door, where it was much quieter.
We then proceeded to demo our project, and it went surprisingly well. None of the demos failed, and we even heard that the HeyGen team was very impressed with our project.
The Results
The Zoom and HeyGen teams both had to leave the hackathon early, so they let us know ahead of the ceremony: we won First Place in the Education Track and the Best Creation with HeyGen Avatar API prize!!
First Place in the Education Track
So... What did we build?
The Video tutor
Minerva is an AI tutor that you video call. Under the hood, we use HeyGen's LiveAvatar to render a live avatar and the Zoom Video SDK to display it in the browser. The user's voice is transcribed using the browser's SpeechRecognition API. We use HeyGen's Full mode purely for text-to-speech, but bypass its voice input entirely so that we can handle LLM inference on our end, giving us full control over tool calls and response logic.
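The flow above (browser transcription in, TTS-only avatar out) can be sketched as a small server-side turn loop. To be clear, this is a hypothetical sketch, not Minerva's actual code: the function names, the tool names, and the shape of the LLM response are all assumptions for illustration.

```python
# Hypothetical sketch of the turn loop: the browser sends a transcript
# (from the SpeechRecognition API), we run LLM inference ourselves, and
# only the final reply text is forwarded to the avatar for text-to-speech.
# All names here (handle_turn, the tool names) are illustrative.

from dataclasses import dataclass, field

@dataclass
class TurnResult:
    speak_text: str                                     # sent to the avatar's TTS
    visual_actions: list = field(default_factory=list)  # rendered by the browser

def handle_turn(transcript: str, llm_call) -> TurnResult:
    """Route one user utterance through the LLM and split the response
    into speech (for the avatar) and visual tool calls (for the page)."""
    # llm_call is assumed to return {"text": ..., "tools": [...]}
    response = llm_call(transcript)
    actions = [
        tool for tool in response.get("tools", [])
        if tool["name"] in ("manim", "desmos", "desmos3d", "geogebra", "applet")
    ]
    return TurnResult(speak_text=response["text"], visual_actions=actions)

# Fake LLM for demonstration: explains a derivative and asks for a graph.
def fake_llm(_: str):
    return {"text": "The derivative of x^2 is 2x.",
            "tools": [{"name": "desmos", "args": {"expr": "y=2x"}}]}

result = handle_turn("What's the derivative of x squared?", fake_llm)
print(result.speak_text)                 # → The derivative of x^2 is 2x.
print(result.visual_actions[0]["name"])  # → desmos
```

The key design point is in the text above: the avatar is a mouth, not a brain, so the tool-call routing lives entirely on our side.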
The Secret Sauce
For visual explanations, Manim animations are rendered on the fly by delegating code generation to a slower but smarter model (Claude Opus 4.5), which writes and executes the Python Manim code to produce the animations. We also integrate Desmos, Desmos 3D, and GeoGebra to render math equations, graphs, and 3D objects. The model can additionally generate custom "applets" by writing HTML, CSS, and JS that get embedded directly into the page via an iframe.
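The Manim step can be sketched as follows. This is a minimal illustration, not our actual pipeline: the function name, directory layout, and quality flag are assumptions, and the real system also executes the command and serves the resulting video to the browser.

```python
# Hypothetical sketch of the on-the-fly Manim step: wrap LLM-generated
# scene code in a script file and build the manim CLI command that would
# render it (low quality by default, for speed during a live session).

import tempfile
from pathlib import Path

def prepare_manim_render(scene_code: str, scene_name: str, quality: str = "-ql"):
    """Write the generated scene to a temp script and return the render
    command, e.g. ["manim", "-ql", "/tmp/.../scene.py", "SceneName"]."""
    workdir = Path(tempfile.mkdtemp(prefix="minerva_manim_"))
    script = workdir / "scene.py"
    script.write_text("from manim import *\n\n" + scene_code)
    return ["manim", quality, str(script), scene_name]

cmd = prepare_manim_render(
    "class Derivative(Scene):\n"
    "    def construct(self):\n"
    "        self.play(Write(MathTex(r\"\\frac{d}{dx}x^2 = 2x\")))\n",
    "Derivative",
)
print(cmd[0], cmd[1], cmd[3])  # → manim -ql Derivative
```

Running generated code like this is the risky part, which is one reason the slower, smarter model handles generation: a broken scene means no animation at all.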
What I learned
Execution and presentation matter just as much as, if not more than, the technical implementation. Minerva isn't a groundbreaking idea, nor is it a very technically impressive project, but the way we executed and presented it is what made it stand out. If a project doesn't "click" for the judges, it's not going to win. We spent a lot of time polishing the experience and making sure it was intuitive and easy to understand. We also made a point of looping in judges throughout the event, asking for their opinions and feedback, and making ourselves known to them. At the end of the day, they're the ones deciding who wins, so that relationship matters. It paid off: we had multiple judges come up to us saying "We heard about this project and had to come over to check it out." (they don't have time to judge every project in person)
Having been to 10 hackathons, I've noticed a consistent trend: especially with the rise of AI, technical implementation is becoming less and less of a differentiator. Anyone can write a few-sentence prompt (plus a "make no mistakes" for good measure) and get modern models to build almost anything. The real differentiator now is the quality of execution and how well you present the idea. (What makes your project stand out from all the other AI slop?)
Prompt engineering was also critical to getting the best results out of the model. We ran into many issues where it would behave unexpectedly (like trying to visualize the solar system on the graphing calculator), and I spent a significant amount of time tweaking the system prompt, carefully adjusting the wording and adding plenty of examples to guide the model toward the right behavior.
What's next?
We're in the process of rebuilding Minerva to be production-ready, and will release it in the coming months :)
As for TreeHacks, I'll definitely be back next year. There was something special about the atmosphere at TreeHacks that I've never experienced at other hackathons. Perhaps it was the high bar for entry and the selectiveness, or the density of genuinely talented, motivated people all in one room: everyone you talked to was building something interesting and had a story worth hearing (especially the sponsors!). Whatever it was, it made the whole weekend feel less like a competition and more like a community. I can't wait to go back.
If you enjoyed this post, please consider giving Minerva a like on Devpost and starring the GitHub repository!
Also read my other blog post about CalHacks 12 (we didn't win, but we built something I'm really proud of)