AI-native content studio

We produce semi-automated YouTube content across 5 channels spanning personal finance, gardening, psychology, and history.

Built on Google Cloud. Scaling to 10+ channels. 4+ videos shipped per week.

  • 5 active YouTube channels
  • 4+ videos published per week
  • ~9 min avg episode length, script-to-publish in 1 day
  • $1.40 marginal cost per 10-min episode (GPU + LLM APIs)

Channels

Five distinct voices. One production system.

Each channel owns its aesthetic, cadence, and audience. The pipeline adapts per brand.

Frugal Forward

Personal finance & anti-consumer wisdom

Hand-drawn cartoon style. Frank breaks down money myths with real math.

The Lazy Grocer

Practical gardening & thrift wisdom

Maggie shares dollar-store hacks, perennial vegetables, and ergonomic tricks.

Inner Mechanics

Psychology & human behavior

Dave dives into narcissism, trauma patterns, and relational mechanics.

Lived

Immersive historical storytelling

Watercolor POV. You are inside the moment — Rasputin, Dyatlov Pass, Whitechapel 1888.

Rhymes of History

Patterns across time — pilot stage

Chronos persona. Pattern-recognition across historical cycles.

Showcase

Five recent episodes. One per channel. All produced by the pipeline.

Each video was researched, scripted, illustrated, voiced, and assembled by our automated stack — script-to-publish in ~1 working day.

Frugal Forward

Her $42K Paycheck Cost Us $4,000. Two-Income Trap.

A real-math breakdown of the second-income myth. 5:48, hand-drawn cartoon style, Frank persona.

The Lazy Grocer

Dollar Tree's NEW $1.50 Garden Products — What Actually Works (and 3 to Skip)

Practical gardening review with field-tested verdicts. 8:31, painterly photoreal visuals, Maggie persona.

Inner Mechanics

Ferrari Brain. ADHD.

Reframing ADHD through neurobiology and lived experience. 5:34, minimalist cartoon, PMC-cited.

Lived

Rasputin — The Night He Wouldn't Die

2nd-person POV of the December 1916 Yusupov Palace assassination. 6:23, watercolor Rackham/Turner palette.

Lived

You Are on Dyatlov Pass: From Nightfall to Dawn

Immersive reconstruction of the 1959 Dyatlov Pass incident. 6:03, same pipeline, different story.

Technology

How one 10-minute episode actually gets made.

Fourteen distinct models and tools, ~6,000 lines of orchestration code, and a ~$1.40 marginal cost per finished episode. AI-native means a purpose-built model for each stage, not a wrapper around a single chatbot.

01 Research (NotebookLM) → 02 Script (Claude Opus) → 03 Prompts (GPT-5.4) → 04 Images (FLUX + LoRA) → 05 Motion (LTX-Video) → 06 Voice (Qwen3-TTS) → 07 Assemble (FFmpeg + publish)
01

Research — NotebookLM

An 11-section structured interview protocol runs over 50+ citations per topic from peer-reviewed and primary sources (PMC, USDA, NIMH). Three phases (baseline → deep dive → synthesis), plus a blind-spot pass and a manual retry for hallucination-prone questions. Output: ~43k characters of source-backed research per episode.
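The control flow of that protocol can be sketched in a few lines. This is a minimal sketch, not the production code: `ask` is a hypothetical stand-in for a NotebookLM query, the phase names mirror the three phases above, and the `"[source]"` marker is an illustrative citation check.

```python
PHASES = ["baseline", "deep_dive", "synthesis"]  # the three phases named above

def run_protocol(questions_by_phase, ask, flagged, max_retries=2):
    """Run phased questions; re-ask hallucination-prone ones until sourced.

    `ask` is a hypothetical callable querying the research backend;
    `flagged` is the set of questions known to be hallucination-prone.
    """
    transcript = {}
    for phase in PHASES:
        for question in questions_by_phase.get(phase, []):
            answer = ask(question)
            retries = 0
            # Retry pass: a flagged question must come back with a citation marker.
            while question in flagged and "[source]" not in answer and retries < max_retries:
                answer = ask(question)
                retries += 1
            transcript[question] = answer
    return transcript
```

Per-question retries keep a single hallucination from forcing a re-run of the whole interview.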

02

Script — Claude Opus + custom playbook

Channel-specific doctrine files enforce voice, cadence, and narrative arc. Reverse-countdown structure (10 items → killer #1), seven-step opening hook formula. Output: ~120 segments × 15-17 words = 9-10 min target runtime at 207-210 wpm. Eight-pass internal quality loop before sign-off.
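The segment math above checks out; a quick verification using only the figures in the text:

```python
def runtime_minutes(segments, words_per_segment, wpm):
    """Spoken runtime implied by segment count, segment length, and speaking rate."""
    return segments * words_per_segment / wpm

# 120 segments of 15-17 words at 207-210 wpm, per the playbook:
shortest = runtime_minutes(120, 15, 210)  # fastest read of the shortest script
longest = runtime_minutes(120, 17, 207)   # slowest read of the longest script
print(f"{shortest:.1f}-{longest:.1f} min")  # 8.6-9.9 min, i.e. the 9-10 min target
```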

03

Visual prompts — GPT-5.4 chunked

122 cinematic-storyboard prompts per episode, generated in 2×61 chunks (single-pass times out). Per-channel visual grammar bakes in palette, hero composition, anti-AI keywords, material-specific detail. ~$0.47 OpenAI cost per episode for prompts alone.
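The chunking itself is simple; a sketch under stated assumptions, where `generate` is a hypothetical stand-in for the LLM call:

```python
def chunked(items, size):
    """Yield consecutive slices of `items`, each at most `size` long."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def storyboard_prompts(beats, generate, chunk_size=61):
    """Build the full prompt list in chunks so no single model call times out.

    122 story beats in chunks of 61 gives the 2x61 split described above;
    `generate` is an assumed interface, not a real API.
    """
    prompts = []
    for chunk in chunked(beats, chunk_size):
        prompts.extend(generate(chunk))
    return prompts
```

Splitting at the chunk level also means a timeout only costs one half-episode of prompts, not all 122.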

04

Images — FLUX-fp8 + custom-trained LoRA

Four character LoRAs trained in-house — Frank, Maggie, Dave, Blob — one per persona. FLUX.1-dev-fp8 base, CFG 3.5, 20-30 steps. Renders on Spot A100 80GB at ~2 sec/image, 122 images per episode. Character identity holds across all 122 frames via LoRA conditioning, not per-shot face-swap.
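Those render figures imply a short GPU pass per episode; a back-of-envelope check using only the numbers stated above (the settings dict is a summary, not a real config file):

```python
RENDER = {                      # settings quoted above
    "base": "FLUX.1-dev-fp8",
    "cfg": 3.5,
    "steps": (20, 30),          # 20-30 denoising steps
    "sec_per_image": 2.0,       # observed on a Spot A100 80GB
}

def episode_gpu_minutes(images=122, sec_per_image=RENDER["sec_per_image"]):
    """Wall-clock GPU time to render one episode's stills."""
    return images * sec_per_image / 60

print(f"{episode_gpu_minutes():.1f} min of A100 time per episode")  # ~4.1 min
```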

05

Motion — LTX-Video 13B Distilled FP8

First segment of every episode is a 3-keyframe image-to-video stitch — a ~15-second motion hook to retain audience past the 8-second drop-off. Speed-matched to the TTS waveform so cuts land on the beat.
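Speed-matching a motion clip to the narration reduces to one retime factor. A minimal sketch using FFmpeg's `setpts` expression; the durations are illustrative, not from the pipeline:

```python
def setpts_filter(video_sec, audio_sec):
    """FFmpeg setpts expression retiming a clip to the TTS segment's length.

    Scaling presentation timestamps by audio/video stretches (ratio > 1)
    or compresses (ratio < 1) playback to fit the narration.
    """
    ratio = audio_sec / video_sec
    return f"setpts={ratio:.4f}*PTS"

# e.g. a 12 s motion clip under a 15 s narration segment:
print(setpts_filter(12.0, 15.0))  # setpts=1.2500*PTS
```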

06

Voice — Qwen3-TTS-12Hz-1.7B + custom clones

A 30-60 second reference per channel produces a stable voice clone. 122 segments are TTS'd individually, gap-trimmed, sequenced — no monolithic generation, because per-segment retry is cheaper than re-rolling the whole episode.
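The per-segment retry economics can be sketched directly. `tts` here is a hypothetical callable (returning audio bytes, or None on a failed take), not the real Qwen3-TTS interface:

```python
def voice_segments(segments, tts, max_retries=3):
    """Synthesize each segment independently so a bad take re-rolls one clip,
    not the whole episode. `tts` is an assumed interface: bytes on success,
    None on a failed take."""
    clips = []
    for index, text in enumerate(segments):
        clip = None
        for _ in range(max_retries):
            clip = tts(text)
            if clip is not None:
                break
        if clip is None:
            raise RuntimeError(f"segment {index} failed after {max_retries} takes")
        clips.append(clip)  # gap-trimming and sequencing happen downstream
    return clips
```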

07

Assembly + publish — FFmpeg + tooling

FFmpeg-driven assembly with SFX (riser cues, beat-aligned punches at 120 BPM, -18 LUFS body / -14 LUFS punches). A vertical 9:16 short — 14 scenes, seamless loop hook, 30s — is generated from the same source. Three thumbnail variants. Manual upload + community post 24h before publish.
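The stated loudness targets map onto FFmpeg's `loudnorm` filter. A sketch that builds the two normalization commands; the file names are illustrative, and the true-peak and loudness-range values are assumed defaults rather than figures from the pipeline:

```python
def loudnorm_cmd(src, dst, lufs):
    """ffmpeg invocation normalizing integrated loudness to `lufs` LUFS.

    TP (-1.5 dBTP true-peak ceiling) and LRA (11 LU loudness range) are
    assumed values, not quoted from the pipeline.
    """
    return [
        "ffmpeg", "-y", "-i", src,
        "-af", f"loudnorm=I={lufs}:TP=-1.5:LRA=11",
        dst,
    ]

body = loudnorm_cmd("body_mix.wav", "body_norm.wav", -18)       # -18 LUFS body
punches = loudnorm_cmd("punches.wav", "punches_norm.wav", -14)  # -14 LUFS punches
```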

Infrastructure

Built on Google Cloud

Every image, every video frame, every voice clip is rendered on Google Cloud GPUs. Compute Engine with L4 for fast iteration, Spot A100 for batch production. NotebookLM for source-backed research. The entire pipeline lives in one Google Cloud project.

  • Compute Engine — 2× A100 80GB Spot VMs at production cadence, ~400 GPU-hours/month
  • Cloud Storage — ~500 GB media archive, version-controlled, lifecycle-tiered
  • NotebookLM — source-backed research, 50+ citations per topic
  • Vertex AI — pipeline orchestration, batch jobs, model serving for in-house LoRAs
  • IAM + Quotas — fine-grained cost and resource control across the project

Target monthly burn at production cadence

~$500–600 / mo

GPU compute (~$480) + storage (~$10) + Vertex AI orchestration. Credits would cover ~18–24 months of operations at the target cadence — directly funding the scaling from 5 to 10+ channels.
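The burn figure is consistent with the GPU-hours quoted earlier; a back-of-envelope check, where the ~$1.20/hr effective Spot A100 rate is inferred from those two numbers, not a quoted price:

```python
GPU_HOURS_PER_MONTH = 400        # from the Compute Engine line above
SPOT_A100_USD_PER_HOUR = 1.20    # assumed effective rate (inferred, not quoted)
STORAGE_USD_PER_MONTH = 10

gpu_usd = GPU_HOURS_PER_MONTH * SPOT_A100_USD_PER_HOUR
print(f"~${gpu_usd:.0f} GPU + ~${STORAGE_USD_PER_MONTH} storage "
      f"= ~${gpu_usd + STORAGE_USD_PER_MONTH:.0f}/mo before orchestration")
```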

Roadmap

From 5 channels to 10+ by end of 2026.

Q2 2026
Now

5 channels active. Pipeline stabilized on Spot A100. 4 videos/week cadence.

Q3 2026
Next

LoRA character consistency. +2 channels. Multi-language voices.

Q4 2026
Goal

10+ channels. EN/RU localization. Multi-platform distribution.

2027
Vision

License pipeline to creators. Open-source non-proprietary parts.

Founder

One operator. One vision. Built in the open.

Valerii Serko, founder of S.Ler Group

Valerii Serko

Founder & Engineer

Engineering background. Founded S.Ler Group in 2026 to build AI-native content infrastructure — production tooling that turns peer-reviewed research into watchable, sourced video at consumer-facing scale. Based in Almaty. The pipeline you see on this site is the product of 8+ months of daily iteration on every stage from research protocol to final assembly.

Get in touch

Partnerships, credits, investment.

Building something specific? Reach out directly.

valerii.serko@slergroup.com

Valerii Serko, founder

Headquarters

S.Ler Group
Vasilia Klochkova 105
Almaty, 050057
Kazakhstan