AI technology company

We build AI infrastructure for video content production — and deploy it across 5 live YouTube channels.

Proprietary pipeline: research → script → images → voice → assembly → publish. Script-to-publish in one working day. Built on Google Cloud.

5 active YouTube channels
4+ videos published per week
~9 min avg episode length, script-to-publish in 1 day
8+ months of daily pipeline iteration

What we are

S.Ler Group is an AI technology company. We design, build, and operate proprietary AI infrastructure for video content production. Our technology stack covers every stage of production — from automated research and scriptwriting to AI-generated visuals, voice synthesis, and final assembly.

What we build

A fully integrated production pipeline — purpose-built models at each stage, orchestrated by ~6,000 lines of custom code. The result: a finished, publish-ready YouTube episode in one working day. We operate 5 live channels as the primary deployment of this technology.

Revenue model

YouTube ad monetization and brand partnerships across our channel portfolio. Digital-native, scales with audience — no physical production costs, no studio overhead. Pipeline efficiency enables margins unavailable to traditional content studios.

Channels

Five live deployments. One pipeline.

Each channel is a live deployment of the same technology stack — different persona, different niche, same infrastructure. Finance, gardening, psychology, history. One pipeline adapts to all.

Frugal Forward

Personal finance & anti-consumer wisdom

Hand-drawn cartoon style. Frank breaks down money myths with real math.


The Lazy Grocer

Practical gardening & thrift wisdom

Maggie shares dollar-store hacks, perennial vegetables, and ergonomic tricks.


Inner Mechanics

Psychology & human behavior

Dave dives into narcissism, trauma patterns, relational mechanics.


Lived

Immersive historical storytelling

Watercolor POV. You are inside the moment — Rasputin, Dyatlov Pass, Whitechapel 1888.


Rhymes of History

Patterns across time — pilot stage

Chronos persona. Pattern-recognition across historical cycles.


Showcase

Five recent episodes. One per channel. All produced by the pipeline.

Each video was researched, scripted, illustrated, voiced, and assembled by our automated stack — script-to-publish in ~1 working day.

Frugal Forward

Her $42K Paycheck Cost Us $4,000. Two-Income Trap.

A real-math breakdown of the second-income myth. 5:48, hand-drawn cartoon style, Frank persona.

The Lazy Grocer

Dollar Tree's NEW $1.50 Garden Products — What Actually Works (and 3 to Skip)

Practical gardening review with field-tested verdicts. 8:31, painterly photoreal visuals, Maggie persona.

Inner Mechanics

Ferrari Brain. ADHD.

Reframing ADHD through neurobiology and lived experience. 5:34, minimalist cartoon, PMC-cited.

Lived

Rasputin — The Night He Wouldn't Die

2nd-person POV of the December 1916 Yusupov Palace assassination. 6:23, watercolor Rackham/Turner palette.

Lived

You Are on Dyatlov Pass: From Nightfall to Dawn

Immersive reconstruction of the 1959 Dyatlov Pass incident. 6:03, same pipeline, different story.

Technology

How one 10-minute episode actually gets made.

Fourteen distinct models and tools, ~6,000 lines of orchestration code, ~$1.40 marginal cost per finished episode. AI-native means a purpose-built model at each stage, not a single-chatbot wrapper.

01 Research · NotebookLM
02 Script · Claude Opus
03 Prompts · GPT-5.4
04 Images · FLUX + LoRA
05 Motion · LTX-Video
06 Voice · Qwen3-TTS
07 Assemble · FFmpeg + publish

01 · Research — NotebookLM

An 11-section structured interview protocol runs over 50+ citations per topic from peer-reviewed and government sources (PMC, USDA, NIMH). Three phases: baseline → deep dive → synthesis, with a blind-spot pass and manual retry for hallucination-prone questions. Output: ~43k characters of source-backed research per episode.

02 · Script — Claude Opus + custom playbook

Channel-specific doctrine files enforce voice, cadence, and narrative arc. Reverse-countdown structure (10 items → killer #1), seven-step opening hook formula. Output: ~120 segments × 15-17 words = 9-10 min target runtime at 207-210 wpm. Eight-pass internal quality loop before sign-off.
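
The runtime target above is plain arithmetic. A minimal sketch, using only the figures stated in this section (120 segments, 15-17 words each, 207-210 wpm):

```python
# Estimate episode runtime from the script structure described above.
def runtime_minutes(segments: int, words_per_segment: float, wpm: float) -> float:
    """Total narration time in minutes for a fixed reading speed."""
    return segments * words_per_segment / wpm

low = runtime_minutes(120, 15, 210)   # shortest segments, fastest read
high = runtime_minutes(120, 17, 207)  # longest segments, slowest read
print(f"{low:.1f}-{high:.1f} min runtime envelope")
```

Mid-spec settings land near the 9-10 minute target, which is why the playbook pins both segment length and words per minute.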

03 · Visual prompts — GPT-5.4 chunked

122 cinematic-storyboard prompts per episode, generated in 2×61 chunks (single-pass times out). Per-channel visual grammar bakes in palette, hero composition, anti-AI keywords, material-specific detail. Fully automated, no manual prompt editing per episode.
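
The chunking itself is trivial; the point is that one 122-prompt call times out while two 61-prompt calls do not. A sketch of the split, where `chunk` is an illustrative helper, not the production API:

```python
# Split 122 storyboard prompt slots into two 61-prompt batches, since a
# single-pass generation call times out at this scale.
def chunk(items, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

segments = [f"segment-{n:03d}" for n in range(1, 123)]  # 122 script segments
batches = list(chunk(segments, 61))
assert [len(b) for b in batches] == [61, 61]
```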

04 · Images — FLUX-fp8 + custom-trained LoRA

Four character LoRAs trained in-house — Frank, Maggie, Dave, Blob — one per persona. FLUX.1-dev-fp8 base, CFG 3.5, 20-30 steps. Renders on Spot A100 80GB at ~2 sec/image, 122 images per episode. Character identity holds across all 122 frames via LoRA conditioning, not per-shot face-swap.
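
A back-of-envelope check on the render budget, using only the two numbers given above (122 images, ~2 s/image on the Spot A100):

```python
# GPU time for one episode's still-image pass at the stated throughput.
images_per_episode = 122
seconds_per_image = 2.0
render_minutes = images_per_episode * seconds_per_image / 60
print(f"~{render_minutes:.1f} min of A100 time per episode's images")
```

Roughly four GPU-minutes of stills per episode is what keeps the marginal cost in the low single dollars.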

05 · Motion — LTX-Video 13B Distilled FP8

The first segment of every episode is a 3-keyframe image-to-video stitch: a ~15-second motion hook to retain viewers past the 8-second drop-off. Speed-matched to the TTS waveform so cuts land on the beat.

06 · Voice — Qwen3-TTS-12Hz-1.7B + custom clones

A 30-60 second reference per channel produces a stable voice clone. 122 segments are TTS'd individually, gap-trimmed, sequenced — no monolithic generation, because per-segment retry is cheaper than re-rolling the whole episode.
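
The retry economics above can be sketched as a per-segment loop. `synthesize` and `passes_qc` below are placeholders standing in for the real TTS call and quality check, which are not documented here:

```python
# Per-segment synthesis with retry: re-rolling one ~16-word clip is far
# cheaper than regenerating a whole 9-minute episode in one pass.
def synthesize(text: str, attempt: int) -> bytes:
    return text.encode()  # placeholder for the actual TTS engine call

def passes_qc(audio: bytes) -> bool:
    return len(audio) > 0  # placeholder for silence/garble detection

def tts_segment(text: str, max_retries: int = 3) -> bytes:
    for attempt in range(max_retries):
        audio = synthesize(text, attempt)
        if passes_qc(audio):
            return audio
    raise RuntimeError(f"segment failed QC after {max_retries} tries: {text!r}")

clips = [tts_segment(seg) for seg in ["Hook line.", "Point one."]]
```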

07 · Assembly + publish — FFmpeg + tooling

FFmpeg-driven assembly with SFX (riser cues, beat-aligned punches at 120 BPM, -18 LUFS body / -14 LUFS punches). A vertical 9:16 short — 14 scenes, seamless loop hook, 30s — is generated from the same source. Three thumbnail variants. Manual upload + community post 24h before publish.
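
The loudness step maps naturally onto FFmpeg's `loudnorm` filter. A minimal sketch targeting the stated -18 LUFS body; the file names are placeholders, and the TP/LRA values are assumptions (only the LUFS targets are given above):

```python
# Build a loudness-normalization command for the narration body.
# The real assembly additionally layers SFX and beat-aligned punches.
def loudnorm_cmd(src: str, dst: str, lufs: float = -18.0) -> list[str]:
    return [
        "ffmpeg", "-y", "-i", src,
        "-af", f"loudnorm=I={lufs}:TP=-1.5:LRA=11",  # integrated loudness target
        dst,
    ]

cmd = loudnorm_cmd("body.wav", "body_norm.wav")
# subprocess.run(cmd, check=True)  # executed in the real pipeline
```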


Infrastructure

Built on Google Cloud

Every image, every video frame, every voice clip is rendered on Google Cloud GPUs. Compute Engine with L4 for fast iteration, Spot A100 for batch production. NotebookLM for source-backed research. The entire pipeline lives in one Google Cloud project.

  • Compute Engine — 2× A100 80GB Spot VMs at production cadence, ~400 GPU-hours/month
  • Cloud Storage — ~500 GB media archive, version-controlled, lifecycle-tiered
  • NotebookLM — source-backed research, 50+ citations per topic
  • Vertex AI — pipeline orchestration, batch jobs, model serving for in-house LoRAs
  • IAM + Quotas — fine-grained cost and resource control across the project

Target monthly burn at production cadence

~$500–600 / mo

GPU compute (~$480) + storage (~$10) + Vertex AI orchestration. Credits would cover ~18–24 months of operations at the target cadence — directly funding the scaling from 5 to 10+ channels.
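
The runway math checks out from the stated line items. In the sketch below, the ~$1.20/hr rate is back-derived from the ~$480 GPU figure and ~400 GPU-hours, not a quoted price, and the $10k grant is hypothetical:

```python
# Monthly burn and credit runway from the figures stated above.
gpu_hours, gpu_rate = 400, 1.20   # ~400 Spot A100 GPU-hours/month (rate assumed)
storage = 10                      # ~$10/mo Cloud Storage
burn = gpu_hours * gpu_rate + storage        # before Vertex AI orchestration
runway_months = 10_000 / burn                # hypothetical $10k credit grant
print(f"~${burn:.0f}/mo, ~{runway_months:.0f} months per $10k of credits")
```

A ~$490-600/mo burn puts a credit grant in the ~18-24 month range quoted above.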

Roadmap

From 5 channels to 10+ by end of 2026.

Q2 2026
Now

5 channels active. Pipeline stabilized on Spot A100. 4 videos/week cadence.

Q3 2026
Next

LoRA character consistency. +2 channels. Multi-language voices. Platform early access — first external creators on the pipeline.

Q4 2026
Goal

10+ channels. EN/RU localization. Multi-platform distribution.

2027
Vision

License pipeline to creators. Open-source non-proprietary parts.

Founder

One operator. One vision. Built in the open.


Valerii Serko

Founder & Engineer

Engineering background. Founded S.Ler Group in 2026 to build AI-native content infrastructure — production tooling that turns peer-reviewed research into watchable, sourced video at consumer-facing scale. Based in Almaty. The pipeline you see on this site is the product of 8+ months of daily iteration on every stage from research protocol to final assembly.

Get in touch

Partnerships, credits, investment.

Building something specific? Reach out directly.

valerii.serko@slergroup.com


Headquarters

S.Ler Group
Vasilia Klochkova 105
Almaty, 050057
Kazakhstan