
From Ten Images to Production: Rapid Custom Detectors Explained

October 7, 2025


Traditional computer vision projects stall because gathering and labelling thousands of real-world images takes weeks. Meanwhile, operational needs go unmet. Fyma offers an alternative: training and deploying custom detectors within 24–48 hours using synthetic data generation from approximately ten representative images.

Why Traditional Custom Detection Is Slow (and Costly)

Traditional approaches stall due to extensive data collection and labelling requirements. Gathering thousands of real-world images, annotating each one, and training a model that generalises well can take months. During that time, operational questions go unanswered and budgets are consumed before a single insight is delivered.

Step 1: Curate Around Ten Seed Images

Begin with carefully selected images covering the object's key variations: different angles, branded items, specific equipment or containers, and mobility devices. Include edge cases like partial occlusion and night lighting where possible. For organisations without an existing image library, camera feed extraction works equally well as a starting point.
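Fyma's extraction tooling isn't public, but the idea of pulling representative seed frames from an existing feed can be sketched as sampling evenly spaced frames from a clip, so the seeds span different moments rather than clustering. The function name and approach here are illustrative assumptions, not Fyma's implementation:

```python
def spread_indices(total_frames: int, k: int) -> list[int]:
    """Pick k evenly spaced frame indices from a clip of total_frames,
    so seed images cover the whole recording rather than one burst."""
    if k <= 0 or total_frames <= 0:
        return []
    k = min(k, total_frames)          # can't sample more frames than exist
    step = total_frames / k
    return [int(i * step) for i in range(k)]
```

For a ten-image seed set, `spread_indices(n_frames, 10)` gives the frame numbers to decode; in practice you would also spread samples across times of day to capture lighting variation.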

Step 2: Generate Synthetic Data - Thousands of Variations

From those seed images, the system synthesises controlled variations covering scale, rotation, backgrounds, lighting conditions, weather, occlusion, and motion blur. This expands coverage of rare and difficult conditions far beyond what manual data collection could achieve in the same timeframe.
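The synthesis step above amounts to randomised augmentation of each seed. A minimal NumPy sketch, assuming uint8 RGB arrays and using only mirroring, exposure shift, and sensor noise (real pipelines add scale, rotation, backgrounds, weather, occlusion, and blur):

```python
import numpy as np

def generate_variations(seed: np.ndarray, n: int,
                        rng: np.random.Generator) -> list[np.ndarray]:
    """Produce n randomised variants of one seed image (H x W x 3, uint8)."""
    out = []
    for _ in range(n):
        img = seed.astype(np.float32)
        if rng.random() < 0.5:
            img = img[:, ::-1]                       # horizontal mirror
        img = img * rng.uniform(0.6, 1.4)            # lighting / exposure shift
        img = img + rng.normal(0, 8, img.shape)      # simulated sensor noise
        out.append(np.clip(img, 0, 255).astype(np.uint8))
    return out
```

Run against ten seeds with `n` in the hundreds, this yields thousands of labelled variants at negligible cost, which is the core of the timeframe advantage.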

Step 3: Train, Validate and Benchmark - Fast

The process splits data into training and validation sets, incorporates real negatives, and trains detectors optimised for the specific camera perspectives in use. Both offline and online validation are included before any detector reaches production.
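The split-and-benchmark logic can be sketched in a few lines. The helper names and the 80/20 split are assumptions for illustration; the key points from the text are that real negative frames go into the pool and that precision/recall are measured before release:

```python
import random

def split_with_negatives(positives, negatives, val_frac=0.2, seed=42):
    """Shuffle labelled positives plus real negative frames, then hold
    out a validation slice for offline benchmarking."""
    data = [(x, 1) for x in positives] + [(x, 0) for x in negatives]
    random.Random(seed).shuffle(data)
    n_val = max(1, int(len(data) * val_frac))
    return data[n_val:], data[:n_val]        # (train, validation)

def precision_recall(preds, labels):
    """Benchmark a detector's binary decisions against ground truth."""
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec
```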

Step 4: Deploy and Monitor

Deployment occurs via the Fyma platform with real-time detection visualisation, region-of-interest drawing, and metric exports to business intelligence tools. Operations teams see results from day one without any infrastructure changes.
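Region-of-interest counting of the kind described reduces to a point-in-polygon test on each detection centre, emitting only a count (no identities). A sketch using the standard ray-casting test, with hypothetical function names:

```python
def point_in_roi(pt, polygon):
    """Ray-casting test: is detection centre pt inside the drawn region?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                    # edge spans pt's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside                 # crossing to the right
    return inside

def roi_count(detections, polygon):
    """Count-only metric for BI export: detections inside the ROI."""
    return sum(point_in_roi(d, polygon) for d in detections)
```

Because only the aggregate count leaves the platform, this is also where the privacy-preserving, count-only output mentioned later comes from.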

Step 5: Iterate with Active Learning

Hard examples surface naturally during real-world operation. They are relabelled without capturing personal information and integrated into updated training sets, improving accuracy continuously without requiring another manual data collection cycle.
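A common way hard examples "surface naturally" is uncertainty sampling: detections whose confidence sits near the decision boundary are queued for relabelling first. The thresholds and field names below are illustrative assumptions, not Fyma's published criteria:

```python
def select_hard_examples(detections, low=0.35, high=0.65, budget=50):
    """Queue ambiguous detections (confidence near the 0.5 decision
    boundary) for anonymised relabelling, most-ambiguous first."""
    ambiguous = [d for d in detections if low <= d["conf"] <= high]
    ambiguous.sort(key=lambda d: abs(d["conf"] - 0.5))
    return ambiguous[:budget]
```

Feeding these relabelled examples back into the training set each cycle is what lets accuracy improve without another manual collection round.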

Real-World Examples

Examples include airport trolley and queue detection, asset-specific amenity tracking for Legal & General, RV entrance counting at event venues, and micro-mobility distinction between e-scooters and standard bikes. Each of these required fewer than 20 seed images and went live within 48 hours of the initial brief.

Why This Matters for CRE and Complex Estates

The benefits are rapid value delivery, lower data requirements, flexibility to track changing operational priorities, portfolio-wide consistency, and privacy preservation through count-only outputs. The combination makes custom computer vision accessible to property teams without dedicated ML engineering resources.

"Fyma has been independently tested against Avigilon (Motorola Solutions) to achieve 97% accuracy - and above."
