I'm a new grad software engineer who is interested in building scalable backend systems. I'm passionate about making thoughtful architectural decisions and learning from every line of code I write.
Under each project, I share not just what I've built, but why I built it that way. Under experience, I provide a roadmap with my learnings and insights from each stage of my career up to this point.
When I'm not coding, you'll find me cheering for the Lakers 🏀, watching Ferrari race on Sundays 🏎️, or discovering new music on Spotify.
A deep dive into distributed systems, polyglot architecture, and the trade-offs of modern software engineering through building a basketball statistics platform using microservices.
Most of my previous projects were about building features. This project was about building a system from the ground up. I wanted to move past monolithic architecture and actually make decisions about performance trade-offs and system design. I built a platform to track NBA player stats and predict future performance, but the real challenge was how these services talked to each other.
I went with a Polyglot Architecture because I wanted the right tool for the job, not just the tool I knew best. I used Java (Spring Boot) for the Security and Stats services because Spring Boot is the de facto standard for microservices, and tools like Netflix Eureka and Spring Cloud were very helpful for service discovery, load balancing, and fault tolerance.
But for the nba-fetcher and prediction services, I used Python. Trying to do data science or heavy API scraping in Java felt like fighting the language, while Python’s ecosystem made it seamless.
One of the biggest trade-offs I made was implementing gRPC for internal communication between my Security and Stats services. I could have used standard REST, but I wanted to practice low-latency communication. Using Protobufs felt like a win because it forced me to define a strict contract between services, instead of guessing what an API response looks like.
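To make the "strict contract" point concrete, this is the kind of .proto definition I mean. The service, RPC, and field names below are illustrative, not the actual files from the project:

```protobuf
syntax = "proto3";

// Illustrative contract between the Security and Stats services.
service StatsService {
  rpc GetPlayerStats (PlayerStatsRequest) returns (PlayerStatsResponse);
}

message PlayerStatsRequest {
  string player_id = 1;
  int32 season = 2;
}

message PlayerStatsResponse {
  double points_per_game = 1;
  double assists_per_game = 2;
}
```

Both sides generate typed stubs from the same file, so a renamed or retyped field shows up as a build failure instead of a runtime surprise.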
When you have five services running at once, you can’t just keep it all in your head.
Documentation as a Tool: I started using ADRs (Architectural Decision Records). For example, I documented why I chose Redis Pub/Sub as the event bus for my AI Agent, writing down the consequences of that decision (services are decoupled) and the trade-offs (no message persistence).
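For readers unfamiliar with the format, a condensed ADR looks roughly like this (paraphrasing the Redis Pub/Sub record described above; the numbering is illustrative):

```
ADR-003: Use Redis Pub/Sub as the event bus for the AI Agent

Status: Accepted
Context: Services need to react to stat updates without calling
         each other directly.
Decision: Publish update events to Redis Pub/Sub channels.
Consequences:
  + Services are decoupled; publishers don't know their subscribers.
  - No message persistence: a subscriber that is down misses events.
```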
Visualizing Complexity: I adopted the C4 Model for my diagrams. I realized that if I couldn't draw the data flow at the Container and Component levels, I didn't actually understand my own system.
Testing & Resilience: I moved away from just testing happy paths. I spent a lot of time writing unit tests in JUnit and Mockito to handle what happens when a service goes down. I used Docker Compose to manage the environment, ensuring that my PostgreSQL and Redis instances were always configured consistently across dev and prod.
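The environment piece boils down to a compose file along these lines (image versions and names here are illustrative, not the project's actual file):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: stats
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

Because every service reads the same definitions, "works on my machine" drift between dev and prod mostly disappears.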
Microservices are a double-edged sword. They give you amazing flexibility, but they tax you with complexity in service discovery and data consistency. Navigating that tax made me a much better developer. I stopped just writing code and started thinking about how systems survive in the real world.
Full-stack system to automatically collect, process, and deliver personalized financial updates via backend automation and a lightweight React web interface.
I chose FastAPI because I needed native async/await support for concurrent API calls to Finnhub. Flask's sync nature would have required workarounds like Celery, adding unnecessary complexity. Django felt too heavy for a focused API service. FastAPI's automatic OpenAPI documentation and Pydantic integration meant I could enforce data contracts at the request boundary rather than discovering issues deep inside business logic. This was important when handling financial data where type safety matters.
I evaluated SendGrid and Mailgun for email delivery, but AWS SES made more sense for three reasons: (1) significantly lower cost at scale, (2) tight integration with other AWS services I planned to use (Lambda for future expansion), and (3) better deliverability reputation for transactional emails. For logging, I chose S3 over local filesystem or CloudWatch because I wanted immutable audit trails that survive container restarts, and S3's lifecycle policies let me automatically archive old logs without manual intervention.
I strictly separated Routers → Services → Repositories to avoid the common mistake of embedding SQLAlchemy queries directly in business logic. This meant I could swap PostgreSQL for another database (or add caching) by only touching the repository layer. The Service layer handles password hashing and ETL orchestration without knowing how User objects are stored. This separation also made testing easier, since I could mock repositories without spinning up a database.
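The layering is easier to see in code. This is a minimal in-memory sketch of the pattern, not the project's real modules (the real repository holds the SQLAlchemy queries, and SHA-256 stands in for a proper password hasher like bcrypt):

```python
from __future__ import annotations

import hashlib
from dataclasses import dataclass


@dataclass
class User:
    email: str
    password_hash: str


class UserRepository:
    """Repository layer: the only place that knows how Users are stored.

    In-memory here; the real one would contain the SQLAlchemy queries,
    so swapping PostgreSQL for another store touches only this class.
    """

    def __init__(self) -> None:
        self._users: dict[str, User] = {}

    def save(self, user: User) -> None:
        self._users[user.email] = user

    def get(self, email: str) -> User | None:
        return self._users.get(email)


class UserService:
    """Service layer: business rules (hashing, orchestration) with no idea
    whether storage is PostgreSQL, SQLite, or a dict."""

    def __init__(self, repo: UserRepository) -> None:
        self._repo = repo

    def register(self, email: str, password: str) -> User:
        # SHA-256 as a stand-in; a real service would use bcrypt/argon2.
        digest = hashlib.sha256(password.encode()).hexdigest()
        user = User(email=email, password_hash=digest)
        self._repo.save(user)
        return user


# A router would only ever call service methods, never the repository.
repo = UserRepository()
service = UserService(repo)
user = service.register("a@b.com", "secret")
```

Testing falls out of this for free: hand the service a fake repository and no database needs to exist.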
The ETL pipeline needed to fetch quotes and profiles for potentially dozens of tickers. Running these sequentially would have caused unacceptable latency. Using asyncio.gather let me fire all requests concurrently, cutting execution time dramatically. I used asyncio.to_thread for the Finnhub SDK (which is synchronous) to prevent blocking FastAPI's event loop (a mistake I made in early iterations that caused the entire API to hang during data fetches).
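A minimal sketch of that pattern, with stand-in fetch functions in place of the real Finnhub calls:

```python
import asyncio
import time


def fetch_profile_blocking(ticker: str) -> dict:
    """Stands in for a synchronous SDK call (like the Finnhub client)."""
    time.sleep(0.05)  # simulated network latency
    return {"ticker": ticker, "kind": "profile"}


async def fetch_quote(ticker: str) -> dict:
    """Stands in for a native-async HTTP call."""
    await asyncio.sleep(0.05)
    return {"ticker": ticker, "kind": "quote"}


async def run_etl(tickers: list[str]) -> list[dict]:
    # Fire everything concurrently instead of sequentially. The blocking
    # SDK calls go through asyncio.to_thread, so they run in worker
    # threads and never stall the event loop.
    tasks = [fetch_quote(t) for t in tickers]
    tasks += [asyncio.to_thread(fetch_profile_blocking, t) for t in tickers]
    return await asyncio.gather(*tasks)


results = asyncio.run(run_etl(["AAPL", "MSFT", "NVDA"]))
```

With six 50 ms calls, the sequential version would take roughly 300 ms; the concurrent version finishes in about one round trip.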
A high-fidelity implementation of a Redis server in Python, featuring a multi-threaded, non-blocking architecture adhering to the Redis Serialization Protocol (RESP).
After building the financial pipeline, I realized I was abstracting away critical infrastructure. I understood FastAPI and PostgreSQL at a high level, but didn't truly grasp how databases handle concurrent writes or how replication actually works. Building Redis forced me to reason about locking primitives, network protocols, and data persistence from first principles. It's the difference between using a tool and understanding the tool. This project taught me why certain database design decisions exist (like why Redis is single-threaded for most operations).
I chose a multi-threaded architecture (one thread per client) over an event loop like asyncio because I wanted to learn traditional concurrency primitives. Redis's actual implementation uses a single-threaded event loop for different reasons (most commands are fast and memory-bound, so one thread sidesteps lock contention entirely), but for a learning exercise, managing threading.Lock and threading.Condition gave me hands-on experience with race conditions and deadlocks. I hit both problems early: clients would hang during BLPOP, and concurrent ZADD operations would corrupt data until I fixed my locking strategy.
BLPOP was the most challenging feature. I needed threads to sleep efficiently when lists were empty, then wake immediately when data arrived. My first attempt used a polling loop (checking every 100ms), which was wasteful and had high latency. The correct solution was threading.Condition, where threads wait on a condition variable and RPUSH broadcasts a notification. This pattern taught me why condition variables exist and how databases handle blocked queries without burning CPU.
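A toy version of the mechanism (not my actual implementation) shows the shape of the fix:

```python
import threading
from collections import deque


class BlockingList:
    """Toy Redis-style list supporting RPUSH and a blocking left pop."""

    def __init__(self) -> None:
        self._items = deque()
        self._cond = threading.Condition()

    def rpush(self, value) -> None:
        with self._cond:
            self._items.append(value)
            self._cond.notify()  # wake a waiting BLPOP; no polling needed

    def blpop(self, timeout=None):
        with self._cond:
            # wait_for re-checks the predicate after every wakeup, which
            # guards against spurious wakeups and lost races.
            if self._cond.wait_for(lambda: self._items, timeout=timeout):
                return self._items.popleft()
            return None  # timed out, like Redis returning nil


lst = BlockingList()
results = []
consumer = threading.Thread(target=lambda: results.append(lst.blpop(timeout=2)))
consumer.start()
lst.rpush("hello")  # the waiting consumer wakes immediately
consumer.join()
```

The consumer sleeps on the condition variable until rpush notifies it, so the wakeup is immediate and costs no CPU while idle.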
Implementing master-replica replication revealed why distributed databases are complex. I had to handle partial command propagation (what if the replica dies mid-sync?), implement PSYNC for incremental updates, and track replica acknowledgments for WAIT commands. The hardest bug was a deadlock in WAIT. I was holding a data lock while waiting for replica responses, blocking all other operations. The fix required releasing locks before network I/O, which is a fundamental pattern in databases.
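The "release locks before network I/O" fix reduces to a small pattern. This is an illustrative sketch with made-up names, not the real server code:

```python
import threading


class ReplicationState:
    """Minimal sketch of the WAIT deadlock fix: never block on
    network-driven state while holding the keyspace lock."""

    def __init__(self) -> None:
        self.data_lock = threading.Lock()      # guards the keyspace
        self.ack_cond = threading.Condition()  # guards replica ack count
        self.acks = 0

    def record_ack(self) -> None:
        # Called when a replica's acknowledgment arrives over the wire.
        with self.ack_cond:
            self.acks += 1
            self.ack_cond.notify_all()

    def wait(self, min_acks: int, timeout: float) -> int:
        # Snapshot whatever is needed under the data lock...
        with self.data_lock:
            pass  # e.g. capture the master replication offset here
        # ...then RELEASE it before blocking, so other clients can keep
        # reading and writing while we wait for replicas to respond.
        with self.ack_cond:
            self.ack_cond.wait_for(lambda: self.acks >= min_acks,
                                   timeout=timeout)
            return self.acks


state = ReplicationState()
threading.Timer(0.05, state.record_ack).start()  # simulated replica ack
got = state.wait(min_acks=1, timeout=2.0)
```

The buggy version held data_lock across the wait_for call; one slow replica then froze every other client.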
A comprehensive data warehousing and analytics solution demonstrating industry best practices by building an ELT pipeline and generating insights from consolidated sales data.
Most backend engineers interact with databases through ORMs, which is fine for OLTP workloads but fails for analytics. I built this project to understand how data warehouses differ fundamentally from transactional databases: different query patterns (aggregations vs. point lookups), different schemas (star schema vs. normalized), and different optimization strategies (columnar storage, materialized views). This knowledge directly applies when designing APIs that need to serve analytical queries efficiently.
I chose Medallion (Bronze/Silver/Gold) over alternatives like just loading directly into a star schema because it provides clear boundaries for data quality improvements. Bronze preserves raw source data (critical for auditing), Silver handles cleansing without business logic creep, and Gold serves analytics without impacting transformation pipelines. When data quality issues emerged in the Silver layer, I could fix them without re-ingesting from source or breaking Gold-layer reports.
I used ELT (Extract-Load-Transform) instead of ETL because SQL Server can process transformations faster than Python. Loading raw CSVs into Bronze, then transforming in-database using SQL, leverages the database's query optimizer and parallelism. ETL (transforming before loading) makes sense when the source system is too slow or you need complex logic that SQL can't express, but for structured CSV data, in-database transformations were 5x faster in my tests.
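The Bronze-to-Silver step can be sketched end to end. Here SQLite stands in for SQL Server, and the table and column names are illustrative; the shape is the same: raw rows land in Bronze untouched, then cleansing happens in-database with SQL rather than in Python.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bronze_sales (order_id TEXT, amount TEXT, country TEXT)"
)

# Extract + Load: raw CSV rows land in Bronze exactly as received,
# bad values and all (note the unparseable amount and untrimmed country).
raw_rows = [
    ("1001", "250.00", " US "),
    ("1002", "n/a", "DE"),
    ("1003", "99.5", "US"),
]
conn.executemany("INSERT INTO bronze_sales VALUES (?, ?, ?)", raw_rows)

# Transform: cleansing runs inside the database, leaning on the SQL
# engine instead of row-by-row Python loops.
conn.execute("""
    CREATE TABLE silver_sales AS
    SELECT order_id,
           CAST(amount AS REAL) AS amount,
           TRIM(country) AS country
    FROM bronze_sales
    WHERE amount GLOB '[0-9]*'  -- drop rows with non-numeric amounts
""")
clean = conn.execute("SELECT COUNT(*) FROM silver_sales").fetchone()[0]
```

Because Bronze still holds the rejected "n/a" row, a later fix to the cleansing rule only requires re-running the Silver transform, not re-ingesting from source.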
A modern, responsive Single-Page Application (SPA) built for a local dental practice focusing on user experience, professional design fidelity, and a maintainable component architecture.
I evaluated three options: vanilla HTML/CSS/JS, React, and Vue. Raw HTML would have meant duplicating features and components across the page and manually managing DOM updates for the appointment form. Vue was tempting for its simplicity, but React's ecosystem (more libraries, better TypeScript support, larger community) made it the pragmatic choice. The component-based model meant I could build Employee.js once and render it with different props, which is impossible with static HTML short of a templating engine.
This is a narrative of my growth, architectural learnings, and pivotal career decisions that have led me to specialize in scalable backend systems.
Incoming Software Engineer
I joined Trojan Marketing Group my first year at USC to bridge my interests in technology and business. TMG works with four clients each year, spending one semester developing data-driven strategy and the next executing the campaign. As I grew from Technologist to Director of Technology, I focused on building tools that made our strategic and creative decisions smarter.
While working with Mailmodo, I noticed our team needed better insights from YouTube analytics. I built an internal Python scraper using the YouTube Data API to analyze engagement metrics and SEO performance. This year, I expanded it into a React-based dashboard so that any member could input a channel ID and instantly retrieve insights like top-performing videos, tag frequency, and engagement patterns with no coding required.
For YATÉ, a beverage brand with strong musical roots, I helped develop a full-stack web app that let users log in through Spotify. We analyzed their listening data to recommend personalized drink pairings during on-campus tastings, creating an interactive and memorable brand experience.
These projects showed me how much I enjoy building technology that directly influences marketing strategy and audience engagement. They weren’t just side projects. They were technical initiatives driven by creative discussion and client insight.
I chose wealth management because of the finance component of my Computer Science and Business Administration degree. I wanted to see how financial advising actually works in practice. I learned about how markets move, how clients think, and how data shapes real investment decisions.
Wealth management is extremely dynamic. Markets shift, clients shift, and strategies must adjust constantly. During my time there, I worked on:
What I enjoyed was seeing how technology and data directly influence decisions that impact people’s financial lives. But I also realized something important about myself: I didn’t resonate with the client-facing side of the role.
I didn’t want to be the person presenting insights. I wanted to be the one building the systems that generate those insights.
At the bank, the engineers build the platforms that traders and advisors rely on every day. And that’s where I found my next role as a software engineer at the bank.
At NASA JPL, I worked on the NASA Web Mod initiative to migrate the existing science.nasa.gov website into a new CMS. It was my first time contributing to a large, public-facing codebase, and I remember immediately feeling the scale and rigor of the engineering environment around me.
A core part of my role was ensuring accessibility compliance. This meant verifying keyboard navigation flows, evaluating screen reader behavior, and making sure the platform supported users with different abilities. It was slow, methodical, and required a level of precision I hadn’t experienced before.
During this time, I gained exposure to:
But the most impactful realization was that I was increasingly drawn to what sits behind the interface: the systems, architecture, data flow, and logic that allow these platforms to operate reliably at scale.
This experience is where I recognized that I want to grow further into backend engineering, working with systems, APIs, and architecture rather than solely focusing on the front-facing product surface.
Sunkist Dental was my first real introduction to web development with real users in mind. Before writing any code, I spent time understanding the clinic’s patient base. Most patients were older adults who preferred calling the front desk to schedule appointments, which highlighted the first major pain point:
The second issue was information inconsistency. Because the clinic was small, business hours and service details weren’t regularly updated on Google or Apple Maps. Patients often had to call just to confirm basic information.
So my task became clear: design a clean, friendly, and easy-to-use website that allows patients to book appointments online and view accurate clinic information without requiring the staff to regularly maintain it.
Through this project, I learned React.js, Firebase, EmailJS, version control, Figma, and how to translate real-world business constraints into product decisions. It was also my first deep dive into frontend design, which was rewarding, but it helped me realize that I’m more energized by logic, data flow, and system architecture than visual styling alone.
Measurable Impact: The online booking system reduced front desk workload by ~20% and contributed to 50+ new patient appointments, proving to me how software can directly improve operations and revenue.
View this website in the Projects section.
Here are some of my interests outside of coding!
I'm always open to discussing new opportunities, interesting projects, or just chatting about technology!