
The technical debt that cost €50K.

Two years on autopilot, a working FinTech product, zero maintenance. Until the day everything collapsed.

April 7, 2026

QuickFund was profitable from month one. Micro-loans, automatic repayments, homemade scoring. It worked. So I stopped touching it. For two years. That's the worst technical decision I ever made.

€50K in losses because I refused to maintain code that was running. Not a hack, not the market. Just technical laziness.

Incident timeline

Early 2024: QuickFund launch
Working MVP. Basic scoring, minimal interface, semi-automated process. Profitable from month one.

Mid-2024: Autopilot mode
The product runs, I focus on other subsidiaries. Zero updates, zero refactoring, zero serious monitoring.

Late 2024: First signals
A few bugs reported by clients. Response times increasing. I patch urgently but never fix the root cause.

Early 2025: The incident
Critical data loss. Repayments not recorded. Clients billed twice. Scoring returning inconsistent results.

Q1 2025: Radical decision
I stop everything. Full stop. No new loans. Complete audit of the code, database, and processes.

2025-2026: Restructuring
Complete rewrite. New stack, new scoring, new processes. Today's QuickFund has nothing in common with the initial MVP.

Loss breakdown

Lost repayments (~€18K): payments received but not recorded in the system

Double billing (~€12K): clients charged twice, manual refunds

Unrecoverable loans (~€15K): faulty scoring approving risky profiles

Restructuring cost (~€5K): time, tools, external audit

Total: ~€50K

01. What caused the problem

Technical debt is code you know is fragile but don't fix. Tests you don't write. Monitoring you don't set up because "it works." On QuickFund I had accumulated:

Zero automated tests — every deployment was a gamble

Database without integrity constraints — duplicates everywhere (see the sketch after this list)

No logging system — when a bug hits, you don't know when or why

API without versioning — one change broke everything silently

No automated backup — the database was the single point of failure
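
To make the database point concrete, here is a minimal sketch of the constraint that was missing. The schema and column names are hypothetical, and SQLite is used only to keep the example self-contained; the idea carries to any relational database. A single UNIQUE constraint is all it takes for the database itself to refuse a payment booked twice.

import sqlite3

# Hypothetical repayments table. The UNIQUE constraint means the same
# processor payment reference can only ever be booked once per loan.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE repayments (
        id           INTEGER PRIMARY KEY,
        loan_id      INTEGER NOT NULL,
        external_ref TEXT NOT NULL,    -- the payment processor's reference
        amount_cents INTEGER NOT NULL CHECK (amount_cents > 0),
        UNIQUE (loan_id, external_ref)
    )
""")

conn.execute(
    "INSERT INTO repayments (loan_id, external_ref, amount_cents) VALUES (?, ?, ?)",
    (42, "pay_abc123", 15000),
)

try:
    # A retried webhook delivers the same payment a second time:
    # the constraint rejects it instead of silently double-booking.
    conn.execute(
        "INSERT INTO repayments (loan_id, external_ref, amount_cents) VALUES (?, ?, ?)",
        (42, "pay_abc123", 15000),
    )
except sqlite3.IntegrityError as exc:
    print(f"duplicate rejected by the database: {exc}")

With constraints like this in place, the double-billing and lost-repayment failure modes become loud errors at write time instead of silent corruption discovered months later.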

02. What I did

The hardest decision is stopping a product that's making money. Telling clients I'm suspending new loans. But it was that or keep piling patches onto rotten code.

01. Complete audit — 3 weeks to map every component

02. Scoring rewrite — algorithm reviewed, tested, manually validated

03. Database migration — a clean PostgreSQL schema with constraints and indexes

04. Test implementation — 85%+ coverage before any redeployment

05. Real-time monitoring — alerts on every financial anomaly (a reconciliation sketch follows this list)

06. Deployment process — CI/CD, staging, mandatory human validation
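
To illustrate step 05, here is the shape of the anomaly check that matters most for a lending product: reconciling what the payment processor reports against what your own database recorded. Everything in this sketch is a stand-in; the data sources are stubbed and the alert hook is hypothetical. The point is that any gap between the two totals fires an alert the same day instead of compounding for months.

from decimal import Decimal


def fetch_processor_total(day: str) -> Decimal:
    # Stub: in reality, the payment processor's settlement report for the day.
    return Decimal("1250.00")


def fetch_recorded_total(day: str) -> Decimal:
    # Stub: in reality, the sum over our own repayments table for the day.
    return Decimal("1100.00")


def alert(message: str) -> None:
    # Stand-in for a real pager, Slack, or email hook.
    print(f"ALERT: {message}")


def reconcile(day: str) -> None:
    processor = fetch_processor_total(day)
    recorded = fetch_recorded_total(day)
    gap = processor - recorded
    if gap != 0:
        # Any mismatch is a financial anomaly: either money moved that we
        # never booked, or we booked money that never moved.
        alert(f"{day}: processor={processor}, recorded={recorded}, gap={gap}")


reconcile("2025-03-01")

Run daily, a check like this would likely have caught the unrecorded repayments within a day, rather than letting ~€18K of them pile up.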

03. The lessons

Five rules I now apply in every company in the group.

01. A running product isn't a maintained product
Working today guarantees nothing for tomorrow. Code ages, and so do its dependencies.

02. No code in prod without tests
If you can't test it, you can't deploy it. (A minimal example follows this list.)

03. Monitoring isn't optional
You need to know in real time what's happening. Especially when you're handling money.

04. 20% of dev time = maintenance
If you don't plan for maintenance, the incident plans it for you.

05. Cutting beats patching
When the debt is too deep, you need the courage to stop. Not to keep duct-taping.
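
Rule 02 is the cheapest one to start applying. The sketch below shows the kind of test that now gates every scoring deployment. The scoring function is a toy stand-in, not the real model, and its thresholds are invented; what matters is that a change to scoring can no longer ship with its behavior unpinned. Run with pytest.

def score(monthly_income: int, existing_debt: int, missed_payments: int) -> int:
    """Toy stand-in for the real scoring model: returns 0 (reject) to 100."""
    if monthly_income <= 0 or missed_payments >= 3:
        return 0
    debt_ratio = existing_debt / monthly_income
    base = 100 - int(debt_ratio * 50) - missed_payments * 10
    return max(0, min(100, base))


def test_rejects_repeated_missed_payments():
    assert score(monthly_income=3000, existing_debt=0, missed_payments=3) == 0


def test_rejects_zero_income():
    assert score(monthly_income=0, existing_debt=0, missed_payments=0) == 0


def test_heavier_debt_load_lowers_the_score():
    light = score(monthly_income=3000, existing_debt=300, missed_payments=0)
    heavy = score(monthly_income=3000, existing_debt=3000, missed_payments=0)
    assert heavy < light


def test_score_stays_in_bounds():
    assert 0 <= score(monthly_income=1, existing_debt=10**6, missed_payments=0) <= 100

A behavioral test like the third one is exactly what the faulty scoring that approved ~€15K of risky loans never had.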

The takeaway

€50K because I thought "it runs" meant "it's solid." It was just luck. Today QuickFund is more robust than ever, but I could have avoided all of this with 2 hours a week. Technical debt is a loan on your future self, and the interest is brutal.

GL