
From Ten Images to Production: Rapid Custom Detectors Explained

Why traditional custom detection is slow (and costly)

Traditional computer vision projects stall because collecting and labelling thousands of real-world images takes weeks. Meanwhile, the operational question—“Can we find this specific trolley, RV or e-scooter?”—remains unanswered. The result: missed windows, bloated budgets and a sceptical board.

A breakthrough approach: Fyma’s computer vision models

Fyma’s approach is different. With about ten representative images, we can train and deploy a custom detector – often within 24–48 hours – using a rigorous synthetic data pipeline. Here’s how it works.

Step 1: Curate ~10 seed images that explain the “needle”

We start with a small, carefully chosen set that covers the essence of the object or behaviour: a luggage trolley from different angles, a branded RV, a particular waste container, a mobility scooter with a basket. Variety beats volume. If you can provide a few edge cases (partial occlusion, night lighting), even better.

Don’t have images? No problem – we can often capture seed frames from your existing camera feeds.

Step 2: Generate synthetic data—thousands of variations

Instead of waiting weeks for real-world examples, we synthesise them. Using the seed images, Fyma’s pipeline creates controlled variations: scale, rotation, backgrounds, lighting, weather, partial occlusion and motion blur. This gives the model the “muscle memory” to recognise the object in your actual environment.

Why synthetic works: it expands coverage of rare conditions (rain on glass, dusk glare) and eliminates bias towards a single viewpoint or time of day.
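Fyma’s exact pipeline is proprietary, but the mechanics are easy to illustrate. Below is a minimal sketch of this kind of augmentation loop using the open-source albumentations library; the transform mix, file names and bounding box are our assumptions for illustration, not Fyma’s actual configuration.

```python
# Illustrative only: expand one labelled seed image into many synthetic variants.
import os
import cv2
import albumentations as A

# The transform mix mirrors the variations described above:
# scale, rotation, lighting, weather, occlusion and motion blur.
augment = A.Compose(
    [
        A.RandomScale(scale_limit=0.3, p=0.7),
        A.Rotate(limit=25, p=0.7),
        A.RandomBrightnessContrast(p=0.7),   # day/night lighting shifts
        A.RandomRain(p=0.2),                 # weather
        A.CoarseDropout(p=0.3),              # partial occlusion
        A.MotionBlur(blur_limit=7, p=0.3),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

image = cv2.imread("seed_trolley.jpg")       # hypothetical seed image
bboxes = [(120, 80, 420, 360)]               # hypothetical box: x1, y1, x2, y2
labels = ["trolley"]

os.makedirs("synthetic", exist_ok=True)
for i in range(500):                         # one seed -> hundreds of examples
    out = augment(image=image, bboxes=bboxes, labels=labels)
    if out["bboxes"]:                        # skip variants where the box was lost
        cv2.imwrite(f"synthetic/trolley_{i:04d}.jpg", out["image"])
```

Each output frame keeps its bounding box in sync with the geometric transforms, which is what makes the synthetic set usable as labelled training data without manual re-annotation.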

Step 3: Train, validate and benchmark—fast

We split synthetic and real frames into training and validation sets, add a small number of real negatives (lookalikes that are not the target), and train an object detector optimised for your camera perspectives. Validation happens both offline (precision/recall, confusion matrices) and online against live feeds to check for drift.
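Fyma hasn’t published its model stack, so purely as an illustration, here is what an equivalent train-and-validate pass looks like with the open-source Ultralytics YOLO API; the dataset file, checkpoint and epoch count are placeholder assumptions.

```python
# Illustrative sketch, not Fyma's actual stack: train on the mixed
# synthetic + real dataset, then report offline validation metrics.
from ultralytics import YOLO

# "detector.yaml" (hypothetical) lists the train/val image folders and
# class names, with real negatives mixed into both splits.
model = YOLO("yolov8n.pt")                   # small pretrained checkpoint
model.train(data="detector.yaml", epochs=50, imgsz=640)

# Offline validation: precision, recall and mAP on held-out frames.
metrics = model.val()
print(metrics.box.mp, metrics.box.mr)        # mean precision, mean recall
```

The online half of validation – running against live feeds to check for drift – reuses the same metrics, just computed on fresh frames per camera.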

Governance: We track versions, metrics and regions of interest (ROIs) so you know where and how the detector is used.

Step 4: Deploy to production and monitor performance

Deployment is push-button via Fyma’s platform. You can see detections and counts in context, draw ROIs and export metrics to your BI tools. We monitor precision/recall by zone and time of day, flag anomalies, and iterate quickly if conditions change.
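Under the hood, zone-level metrics reduce to testing each detection against the ROIs you’ve drawn. Here is a minimal sketch of that check using shapely; the zone coordinates and detection format are assumptions for illustration.

```python
# Illustrative: tally detections per drawn ROI for zone-level metrics.
from shapely.geometry import Point, Polygon

# Hypothetical ROIs in pixel coordinates, as drawn in the platform UI.
zones = {
    "entrance": Polygon([(0, 0), (640, 0), (640, 360), (0, 360)]),
    "storage":  Polygon([(640, 0), (1280, 0), (1280, 720), (640, 720)]),
}

def zone_counts(detections, zones):
    """Count detections whose box centre falls inside each ROI."""
    counts = {name: 0 for name in zones}
    for x1, y1, x2, y2 in detections:
        centre = Point((x1 + x2) / 2, (y1 + y2) / 2)
        for name, roi in zones.items():
            if roi.contains(centre):
                counts[name] += 1
    return counts

print(zone_counts([(100, 100, 200, 220), (700, 50, 820, 200)], zones))
# -> {'entrance': 1, 'storage': 1}
```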

Step 5: Iterate with active learning

As the system runs, it surfaces “hard examples.” A small set of these gets re-labelled (without PII) and folded back into the training set. This is how detectors stay robust when signage changes, layouts shift or seasonal lighting arrives.
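“Hard examples” are typically the frames where the detector is least sure of itself. A minimal sketch of that selection step (the confidence band and frame format are our assumptions):

```python
# Illustrative: queue uncertain detections for human review and retraining.
# Mid-confidence frames are usually the most informative ones to relabel.
LOW, HIGH = 0.3, 0.6   # assumed uncertainty band, tuned per deployment

def select_hard_examples(frames):
    """frames: iterable of (frame_id, [detection confidences]) pairs."""
    return [
        frame_id
        for frame_id, confidences in frames
        if any(LOW <= c <= HIGH for c in confidences)
    ]

scored = [("cam1_0001", [0.92]), ("cam1_0002", [0.45, 0.88]), ("cam1_0003", [0.12])]
print(select_hard_examples(scored))   # -> ['cam1_0002']
```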

Real-world examples 

  • Airport operations: detect and count specific trolleys and queue states to balance lanes.

  • Legal & General use case: asset-specific amenity tracking to validate upgrades.

  • RV detection: run counts at entrances to plan parking allocation and wayfinding.

  • Micro-mobility: identify e-scooters vs bikes to improve storage and policy signage.


Accuracy note:
In independent testing against Avigilon (Motorola Solutions), Fyma achieved accuracy of 97% and above. Where formal benchmarks are needed, we share our methods and run controlled pilots.

Why this matters to CRE and complex estates

  • Speed to value: answer narrow operational questions this week, not next quarter.

  • Lower data burden: no multi-month labelling effort.

  • Flexibility: track the things that actually change your decisions, not just generic classes.

  • Portfolio consistency: roll the same detector across sites and compare like with like.

  • Privacy: detectors output counts and timings; no identity data is stored or shown.

Choosing good seeds: a quick checklist

  • Cover front, side and partial views.

  • Include scale (close and far) and lighting (day, evening).

  • Capture context (on the floor, by a wall, near glass).

  • Add negatives: objects that are similar but not the target.

  • Ensure image quality is representative of your cameras.
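To make the checklist concrete, here is a small sanity-check script (our own illustration, not a Fyma tool) that reports resolution and brightness spread across a folder of candidate seeds, so you can see at a glance whether scale and lighting variety are covered.

```python
# Illustrative: quick variety report for a folder of candidate seed images.
from pathlib import Path
from PIL import Image, ImageStat

def seed_report(folder):
    for path in sorted(Path(folder).glob("*.jpg")):
        img = Image.open(path)
        brightness = ImageStat.Stat(img.convert("L")).mean[0]  # 0=dark, 255=bright
        print(f"{path.name}: {img.size[0]}x{img.size[1]}, brightness {brightness:.0f}")

# A healthy seed set shows a spread of sizes and brightness values,
# not ten near-identical daytime shots.
seed_report("seeds/")   # hypothetical folder of ~10 seed images
```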


The bottom line is that with synthetic data and a tight feedback loop, custom detection becomes a two-day sprint – not a two-month project.


