Taufik Hidayat

Full Stack Developer building modern web experiences.


© 2026 Taufik Hidayat. All rights reserved.


🇬🇧 United Kingdom · Automotive · March 2024

Smartgenix

An AI-powered platform for generating marketing copy and long-form content.

[Image: Smartgenix screenshot 1]

About this project

Smartgenix is a SaaS platform based in the UK that helps marketing teams and content creators produce high-quality copy at scale using OpenAI's GPT-4 models. Users can generate blog posts, ad copy, email campaigns, and social media content from structured templates, then refine outputs with tone controls and version history — all without leaving the browser.

  • 20+ content types
  • < 3 s average generation time
  • 50+ templates
  • 40% token savings

Key Responsibilities

  1. Built the Next.js 14 frontend with App Router, Server Actions, and streaming UI for real-time token output.
  2. Integrated the Vercel AI SDK with OpenAI GPT-4o for structured generation with JSON schema enforcement.
  3. Designed the Supabase schema for users, workspaces, templates, and content version history.
  4. Implemented subscription billing with Stripe and usage-based token quota enforcement per plan tier.
  5. Created a visual template builder that lets non-technical users define prompt structures with variable slots.
  6. Set up Vercel Edge Functions for low-latency streaming responses across global regions.
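The core of the template builder (item 5) is plain placeholder substitution. This is a minimal sketch under my own naming, not the project's actual code: a template stores text with `{{slot}}` placeholders, and rendering fills each slot from user-supplied values.

```typescript
// Hypothetical sketch of variable-slot substitution for the template builder.
type SlotValues = Record<string, string>;

function renderTemplate(template: string, values: SlotValues): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, slot: string) => {
    // Leave unknown slots visible so missing inputs are easy to spot.
    return slot in values ? values[slot] : match;
  });
}

// Example: an ad-copy template with two slots.
const adTemplate = "Write a {{tone}} ad for {{product}} in under 50 words.";
const prompt = renderTemplate(adTemplate, { tone: "playful", product: "Smartgenix" });
```

Leaving unresolved slots in place (rather than substituting an empty string) makes incomplete templates self-diagnosing in the preview pane.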

Challenges

Streaming responses with partial save states

Users frequently navigate away mid-generation. Built a custom streaming hook that persists partial output to Supabase every 500 ms, so content is never lost even if the connection drops.

Controlling GPT output quality at scale

Raw LLM output varied wildly across user inputs. Introduced a two-pass generation pipeline: first a structured JSON plan, then a prose expansion step — increasing output consistency by ~60% based on user ratings.
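The two-pass pipeline can be sketched as follows, with the model stubbed out as a plain function (the production version calls GPT-4o via the Vercel AI SDK). Pass 1 requests a structured JSON plan and validates it; pass 2 expands each planned section into prose. All names are illustrative, not the project's actual code.

```typescript
// Sketch of a two-pass generation pipeline: JSON plan, then prose expansion.
type Plan = { title: string; sections: string[] };
type Model = (prompt: string) => string;

function generateArticle(topic: string, model: Model): string {
  // Pass 1: structured plan. Validating the JSON here is what keeps
  // downstream output consistent across wildly varied user inputs.
  const raw = model(`Return a JSON plan {"title", "sections"} for: ${topic}`);
  const plan = JSON.parse(raw) as Plan;
  if (!plan.title || !Array.isArray(plan.sections)) {
    throw new Error("model returned an invalid plan");
  }
  // Pass 2: expand each planned section into prose.
  const body = plan.sections
    .map((s) => model(`Write one paragraph for the section "${s}".`))
    .join("\n\n");
  return `${plan.title}\n\n${body}`;
}
```

Splitting planning from prose means a malformed plan fails fast and cheaply, before any long-form tokens are spent.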

Cost control for token-heavy workloads

Uncapped API usage made the unit economics unsustainable in beta. Implemented a client-side token estimator that warns users before submission, plus server-side hard limits with graceful truncation.
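A minimal sketch of the cost guard, assuming the common ~4-characters-per-token rule of thumb for English text: a cheap client-side estimate to warn before submission, and a truncation helper for the server-side hard cap that cuts on a word boundary rather than rejecting outright. The constants and names are my illustration, not the project's real limits.

```typescript
// Illustrative token estimate and graceful truncation for cost control.
const CHARS_PER_TOKEN = 4; // rough heuristic for English prose

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Truncate to a token budget, keeping whole leading words.
function truncateToBudget(text: string, maxTokens: number): string {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  if (text.length <= maxChars) return text;
  const cut = text.slice(0, maxChars);
  const lastSpace = cut.lastIndexOf(" ");
  return lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
}
```

The estimate only needs to be accurate enough to warn users; the server-side cap is what actually protects the unit economics.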

Accomplishments

  • Reduced average content drafting time from 45 minutes to under 5 minutes per piece for early users.
  • Achieved 40% reduction in OpenAI token spend through prompt caching and smart truncation strategies.
  • Grew to 200+ active users within 6 weeks of launch with zero paid marketing.
  • Received direct feature-request engagement from 3 UK marketing agencies for white-label licensing.
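The prompt-caching part of the token-spend reduction can be sketched as memoizing model calls by normalized prompt, so repeated template runs with identical inputs never hit the API twice. A production cache would live in Redis or Supabase with TTLs; this in-memory `Map` (my own naming throughout) only shows the shape.

```typescript
// Hypothetical sketch of prompt caching: memoize by normalized prompt.
type Model = (prompt: string) => string;

function withPromptCache(model: Model): { call: Model; hits: () => number } {
  const cache = new Map<string, string>();
  let hits = 0;
  const call: Model = (prompt) => {
    const key = prompt.trim().toLowerCase(); // normalize trivial variation
    const cached = cache.get(key);
    if (cached !== undefined) {
      hits++;
      return cached;
    }
    const result = model(prompt);
    cache.set(key, result);
    return result;
  };
  return { call, hits: () => hits };
}
```

Normalizing the key before lookup is what turns near-duplicate template runs into cache hits instead of fresh API calls.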

Tech Stack

Next.js · OpenAI API · Vercel AI SDK · Supabase · Tailwind CSS