Feed the model a 4K broadcast at 120 fps; it returns a heat-map of every rehearsed move, tagging each frame with the probability that the action was choreographed. Last season, the Bundesliga’s 18 clubs uploaded 612 full-length recordings; the network located 1 047 set-piece routines with 94.3 % accuracy, cutting manual tagging from 14 staff-hours to 11 minutes.
The pipeline: YOLOv8 locates all 22 jerseys; Social-LSTM predicts their trajectories 1.2 s ahead; a Transformer encoder classifies the emerging geometry into 19 labelled schemes (overlap-run, third-man screen, under-lap drag, etc.). Training took 38 h on 8×A100 GPUs using 1.8 M labelled frames. Inference on a laptop RTX 4060 runs at 173 fps, so a 90-minute file is processed before the post-match press conference begins.
Coaches receive a JSON dump: timestamp, scheme-ID, participation vector, success flag. Hoffenheim used the data to discover that rivals score 0.28 xG per corner when defending with a hybrid zonal-man mark; they switched to pure zonal and conceded 0.09 xG in the next 9 fixtures. Licensing the API costs €2 700 per month; the club estimates it saved €420 k in scouting hours and gained 4 extra points that kept them in the top half.
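The JSON dump can be consumed with a few lines of Python. A minimal sketch follows; the field names (`ts`, `scheme_id`, `participants`, `success`) are illustrative assumptions matching the description above, not the vendor's documented schema:

```python
import json

# Hypothetical per-match dump: timestamp, scheme-ID, participation vector,
# success flag. Field names are assumptions for illustration.
events = json.loads("""
[
  {"ts": 812.4,  "scheme_id": 7, "participants": [2, 7, 9, 11], "success": true},
  {"ts": 1544.0, "scheme_id": 7, "participants": [2, 9, 11],    "success": false},
  {"ts": 2210.8, "scheme_id": 3, "participants": [4, 5, 8],     "success": true}
]
""")

def success_rate_by_scheme(events):
    """Fraction of detected routines that ended successfully, per scheme ID."""
    totals, wins = {}, {}
    for e in events:
        sid = e["scheme_id"]
        totals[sid] = totals.get(sid, 0) + 1
        wins[sid] = wins.get(sid, 0) + (1 if e["success"] else 0)
    return {sid: wins[sid] / totals[sid] for sid in totals}

print(success_rate_by_scheme(events))  # scheme 7: 0.5, scheme 3: 1.0
```

From here it is a short step to the kind of per-opponent aggregate Hoffenheim used, e.g. filtering events by scheme ID before summing xG.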
Calibrate camera angle to generate a top-view heat map in 90 seconds

Point the lens from 12 m above the halfway line, tilt 35° down, lock the tripod; feed two stills into OpenCV, run findHomography with 12 reference marks (corner-flag bases, penalty arcs, center-circle hashes), and accept RANSAC confidence >0.9; the homography matrix converts every pixel coordinate to centimeter-grade XY on a virtual 105×68 m plane, ready for density accumulation.
Press Calibrate; 1.3 s later the code writes H.txt. Drop it into the tracking script, set bin=0.5 m, Gaussian σ=1.2 m; a 30 fps clip finishes in 88 s on an M1 MacBook Air. Dump to PNG at 300 dpi with a 512-color Brewer palette (1.2 MB file), open in Inkscape, overlay arrows, done.
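Once findHomography has produced H, the projection and 0.5 m binning step could be sketched as below; an identity homography and dummy positions stand in for real calibration output, and the Gaussian smoothing pass is left as a comment:

```python
import numpy as np

# Sketch of the pixel-to-pitch projection and density accumulation described
# above. H would normally be loaded from H.txt; identity is a placeholder.
H = np.eye(3)

def pixel_to_pitch(H, pts):
    """Map Nx2 pixel coordinates onto the 105x68 m pitch plane via homography H."""
    pts = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]                # perspective divide

def accumulate(xy, bin_m=0.5, pitch=(105.0, 68.0)):
    """Count positions into a 0.5 m grid; Gaussian (sigma=1.2 m) smoothing follows."""
    nx, ny = int(pitch[0] / bin_m), int(pitch[1] / bin_m)
    grid = np.zeros((ny, nx))
    ix = np.clip((xy[:, 0] / bin_m).astype(int), 0, nx - 1)
    iy = np.clip((xy[:, 1] / bin_m).astype(int), 0, ny - 1)
    np.add.at(grid, (iy, ix), 1)                     # unbuffered, handles repeats
    return grid

positions = np.array([[52.5, 34.0], [52.6, 34.1], [10.0, 5.0]])
heat = accumulate(pixel_to_pitch(H, positions))
print(heat.shape)  # (136, 210) bins covering 68 x 105 m
```

`np.add.at` is used rather than plain indexing so that multiple points landing in the same bin all count.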
Label player roles with 4-click polygon tool for instant tactic overlay

Draw a quadrilateral around each footballer: LB, RB, CB, CDM, CM, CAM, LW, RW, CF. Four left-clicks close the shape and the panel auto-opens; pick the role from the 14-position list or hit the hotkey (1-0, Q, W, E, R). The neural net locks the ID for the whole sequence, so re-labeling propagates across 25 fps without extra clicks. Export the JSON: frame, track_id, role, x1y1…x4y4, timestamp at 0.04 s granularity.
- Keep the polygon tight: 40 px margin to boots, 20 px to head; mAP drops 7 % if the box includes grass.
- Assign color codes: left-back #0C7BDC, pivot #FEBE10; the SVG overlay renders 60 % opaque so grass texture stays visible on a 1080p stream.
- Batch-correct: Shift-click any vertex, drag 3 px, press Enter; every frame within ±0.5 s updates in 180 ms for a 90-minute file.
- Store templates: export 4-3-3 defensive once; next fixture, import, hit Apply, all 11 roles populate in 0.8 s.
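The template import step might look like the following sketch, assuming a simple JSON template format (formation name plus an ordered role list); that schema is an assumption, not the tool's documented format:

```python
import json

# Hypothetical template file for the "store templates" step: formation name
# plus an ordered list of the 11 roles. Schema is illustrative.
template = json.loads("""
{"formation": "4-3-3 defensive",
 "roles": ["GK", "LB", "CB", "CB", "RB", "CDM", "CM", "CM", "LW", "CF", "RW"]}
""")

def apply_template(track_ids, template):
    """Pair the 11 tracked IDs with the stored roles in order."""
    roles = template["roles"]
    if len(track_ids) != len(roles):
        raise ValueError("template expects exactly %d tracks" % len(roles))
    return dict(zip(track_ids, roles))

assignment = apply_template(list(range(1, 12)), template)
print(assignment[1], assignment[11])  # GK RW
```

The real Apply button presumably matches roles to tracks by position rather than by order; the sketch only shows the bulk-assignment shape.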
Export automated offside-line PNG frame for VAR review in one click
Bind the VAR-Export hotkey to F10; the moment the neural network flags a freeze-frame, hit F10 and the engine writes a 4K PNG (3840×2160, 48-bit color, no compression) into /var/exports with the offside line, depth-mapped skeleton stick, ball center cross, and UTC timestamp burned into the top-left corner within 0.8 s.
File name syntax: offside_
Store camera intrinsics (f, cx, cy, k1, k2) and extrinsic 4×4 matrix as Base64 inside PNG’s zTXt chunk under keyword CAMERA; calibration data travels with the image, so the 3D line can be reconstructed in the truck even if the database link drops.
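A minimal sketch of building the zTXt chunk by hand follows; the CAMERA keyword matches the text, while the intrinsic values and the repr-based payload encoding are illustrative assumptions:

```python
import base64
import struct
import zlib

# Sketch of packing calibration into a raw PNG zTXt chunk: 4-byte length,
# 4-byte type, keyword + NUL + compression-method byte + deflated text, CRC.
def ztxt_chunk(keyword: bytes, text: bytes) -> bytes:
    body = keyword + b"\x00\x00" + zlib.compress(text)  # second \x00 = method 0
    chunk = b"zTXt" + body
    return struct.pack(">I", len(body)) + chunk + struct.pack(">I", zlib.crc32(chunk))

# Illustrative intrinsics; real values come from the calibration step.
intrinsics = {"f": 2850.0, "cx": 1920.0, "cy": 1080.0, "k1": -0.11, "k2": 0.03}
payload = base64.b64encode(repr(intrinsics).encode())
chunk = ztxt_chunk(b"CAMERA", payload)

def read_ztxt(chunk: bytes) -> bytes:
    """Reverse the packing: strip length/type/CRC, skip keyword and method byte."""
    body = chunk[8:-4]
    keyword, rest = body.split(b"\x00", 1)
    return zlib.decompress(rest[1:])

assert read_ztxt(chunk) == payload  # round-trips cleanly
```

In practice the chunk is spliced into the file just before IEND; because CRC and length are self-contained, no other chunk needs touching.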
Limit the export queue to 12 concurrent writes; beyond that, buffer to RAM-disk and flush during the next dead-ball. Average file size is 6.3 MB; a 90-minute fixture with 22 checked incidents consumes ≈ 139 MB, which fits on a 1 Gbps link to the Ruckus AP under the stand with 0.12 s uplink time.
If the ball is within 40 cm of the offside plane, auto-trigger a second PNG from the reverse-angle camera; concatenate both images side-by-side (7680×2160) and embed the triangulation residual (mm) in a tEXt chunk (PNG's tIME chunk only holds a modification timestamp, not arbitrary data). VAR officials receive a single composite, eliminating manual sync and cutting review median from 42 s to 28 s across 312 Bundesliga tests.
Trigger Slack alert when pressing intensity drops below coach threshold
Pipe the raw 25 Hz GPS stream into a Kafka topic named press_index. Run a 90-second tumbling window that computes the average number of high-speed bursts (>7.2 m/s) per minute; if the value falls under the coach’s mark (commonly 4.8), a Cloud Function fires a POST to Slack with the formation ID, timestamp, and delta.
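The per-window metric can be sketched without the Kafka plumbing; the burst-counting rule (count upward crossings of the 7.2 m/s line) is an assumption about how the index is defined:

```python
import numpy as np

# One 90 s tumbling window of 25 Hz speed samples -> high-speed bursts per
# minute, compared against the coach's mark (4.8, per the text).
HZ, WINDOW_S, BURST_MS, THRESHOLD = 25, 90, 7.2, 4.8

def bursts_per_minute(speeds):
    """Count upward crossings of the high-speed band and normalize to per-minute."""
    above = speeds > BURST_MS
    starts = np.count_nonzero(above[1:] & ~above[:-1]) + int(above[0])
    return starts / (len(speeds) / HZ / 60.0)

rng = np.random.default_rng(0)
window = rng.uniform(1.0, 6.0, HZ * WINDOW_S)   # quiet spell: no sprints
window[100:150] = 8.0                           # inject one 2-second burst
rate = bursts_per_minute(window)
if rate < THRESHOLD:
    print(f"ALERT: pressing dropped to {rate:.2f} bursts/min")
```

In the real pipeline this function would run inside the 90-second tumbling window on the `press_index` topic, and the alert branch would fire the Cloud Function POST to Slack.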
Keep the payload under 1 kB so the phone on the bench receives it within 1.3 s. Include a base64-encoded 5-frame GIF pulled from the side-camera RTMP feed at 2 fps; the Slack mobile app renders it inline, giving the staff visual proof without opening another program. Tag the message with the @analyst group so it bypasses mute mode during the clash.
Store the last 200 events in Redis with a TTL of 24 h; expose a /press-trace endpoint that returns JSON for any 30-second slice so the fitness coach can compare Wednesday’s training drill to Sunday’s real fixture. If three sequential alerts occur inside 15 minutes, escalate by pushing a second message to a private channel that includes predicted recovery time based on live lactate estimates from the wearable strap.
Calibrate the threshold weekly: export the previous six sessions to BigQuery, run a linear regression between second-half pressing density and final goal difference, then update the constant in Firestore. Clubs using this method saw the count of silent pressing dips drop 38 % within a month, translating to one extra high-turnover situation per half.
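The weekly recalibration could look like the sketch below; the six session values and the rule for extracting the constant (the pressing density at which the fit predicts a level game) are illustrative assumptions:

```python
import numpy as np

# Regress second-half pressing density against final goal difference over the
# last six sessions, then derive a new alert constant. Data is made up.
density = np.array([4.1, 5.3, 4.8, 6.0, 5.5, 4.4])   # bursts/min
goal_diff = np.array([-1, 1, 0, 2, 1, -1])

slope, intercept = np.polyfit(density, goal_diff, 1)

# Threshold = pressing density at which the fit crosses zero goal difference;
# below it, the model expects the match to tilt against us.
threshold = -intercept / slope
print(f"updated threshold: {threshold:.2f} bursts/min")
```

The resulting constant would then be written back to Firestore for the next week's alerting.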
Compare set-piece routines across 10 matches using clustered trajectory GIF
Feed 4-second clips (kick to first contact) into YOLOv8-pose; export 17-point skeletons at 50 fps, concatenate the xy coordinates, reduce to 128-D with UMAP, cluster via HDBSCAN min_cluster_size=7. Export centroids as a 256-color 240×135 px GIF at 12 fps; color each routine by cluster ID. Liverpool produced 6 distinct clusters from 23 corners; clusters 2 and 5 converged on near-post drag-backs, differing only by 0.4 m in runner depth.
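The feature-building step that feeds UMAP and HDBSCAN can be sketched dependency-free; the actual `umap.UMAP` and `hdbscan.HDBSCAN` calls are left as a comment so the sketch runs with numpy alone:

```python
import numpy as np

# A 4 s clip at 50 fps with 17 skeleton joints becomes one flat vector per
# routine: 200 frames x 17 joints x 2 coords = 6800 dimensions.
FPS, SECONDS, JOINTS = 50, 4, 17

def routine_vector(skeletons):
    """skeletons: (frames, joints, 2) array of xy -> flat feature vector."""
    assert skeletons.shape == (FPS * SECONDS, JOINTS, 2)
    return skeletons.reshape(-1)

rng = np.random.default_rng(1)
clips = [rng.normal(size=(FPS * SECONDS, JOINTS, 2)) for _ in range(3)]
X = np.stack([routine_vector(c) for c in clips])
print(X.shape)  # (3, 6800)

# Downstream (not executed here):
#   emb = umap.UMAP(n_components=128).fit_transform(X)
#   labels = hdbscan.HDBSCAN(min_cluster_size=7).fit_predict(emb)
```

One routine per row keeps the UMAP input shape simple; cluster IDs then drive the GIF coloring described above.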
Overlay cumulative xG heat-map on each GIF frame: deep red ≥0.08, yellow 0.04, grey 0.01. Arsenal’s outswinger cluster (cluster 4) peaked at frame 7 with 0.11 xG when Partey blocked the lane; frame 9 dropped to 0.03 as the marker cleared. Embed a QR code bottom-right linking to JSON with frame-wise coordinates; 21×21 modules (version 1), ECC level H.
| Club | Cluster ID | Count | Mean xG | Runner speed, m/s | 1st-contact height, m |
|---|---|---|---|---|---|
| Liverpool | 2 | 11 | 0.09 | 6.2 | 1.8 |
| Arsenal | 4 | 9 | 0.11 | 5.7 | 2.1 |
| Newcastle | 1 | 7 | 0.06 | 5.9 | 1.6 |
| Chelsea | 3 | 10 | 0.07 | 6.0 | 1.9 |
Compress the GIF with gifsicle --lossy=80 --optimize=3; filesize drops 62 % to 314 kB, SSIM vs. original 0.97. Host on CloudFront with Cache-Control: max-age=604800, immutable; 95th-percentile latency falls from 280 ms to 41 ms for global users. Append &loop=0 to the player URL to freeze on the final frame; analysts bookmark frame 17 where cluster centroids diverge, frame diff MSE 0.003, signalling the optimal block-assignment switch-point.
Package code-free Jupyter notebook to share clips with timestamped tags
Export the notebook as a Voilà dashboard: install voila, ipywidgets, ipyvuetify; run voila --no-browser --strip_sources=False --enable_nbextensions=True your_notebook.ipynb; zip the folder containing the .ipynb, requirements.txt, widgets.yaml, and a 32-line helper script voila_launcher.py that auto-selects the free port 8866. Recipients double-click the launcher, open localhost:8866, drag in an mp4, set four parameters (window width 15 s, overlap 0.2, confidence 0.73, tag prefix CB_), click Build reel, and receive a 720p WebM compilation plus a csv with frame-id, utc-ms, label, x1, y1, x2, y2.
- widgets.yaml lists every label once with a 6-digit hex color; the notebook reads it with PyYAML and renders a 12-row palette so users can deactivate offside-trap or high-press before rendering
- requirements.txt pins exact versions: opencv-python==4.8.1.78, decord==0.6.0, moviepy==1.0.3, pandas==2.0.3, ipywidgets==8.1.1; total size 47 MB, fits on a 128 MB USB stick
- the csv contains a clip_url column: file:///clips/cb_07_42_13.mp4#t=07:42:13,07:42:28; when the dashboard opens, ipywidgets.HTML renders those strings as clickable tags, so any browser on the same LAN can jump to the exact sub-second mark
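The launcher's port selection might be sketched as below; everything beyond the 8866 default is an assumption about what the 32-line helper does:

```python
import socket

# Hypothetical core of voila_launcher.py: take 8866 if free, otherwise walk
# upward so a second dashboard on the same machine still launches.
def pick_free_port(preferred=8866, tries=10):
    """Return `preferred` if bindable, else the next free port above it."""
    for port in range(preferred, preferred + tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port          # bind succeeded; socket closes on exit
            except OSError:
                continue             # port busy, try the next one
    raise RuntimeError("no free port near %d" % preferred)

port = pick_free_port()
# The launcher would then hand the port to Voilà, e.g.:
# subprocess.run(["voila", "--no-browser", f"--port={port}", "your_notebook.ipynb"])
print(port)
```

Binding briefly and releasing is a race-prone but common idiom; Voilà itself will error loudly if the port is snatched in between.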
Share via GitHub release: push the zipped repo, create release v2026.06, attach the same zip; GitHub auto-generates a launch binder badge; clicking it spawns a cloud container with 2 vCPU, 4 GB RAM, builds the dashboard in 38 s; viewers need no account; sessions auto-delete after 10 min of inactivity; the repo’s README contains a 7-second silent gif demonstrating the full cycle, keeping the storage under 5 MB so the badge loads in 1.4 s on 4G.
FAQ:
How does the system tell the difference between a planned move and a lucky accident?
It looks at repetition. A one-off lucky bounce won’t show up again, so the model filters it out. If the same spacing, passing angles and sprint triggers appear three or four times in similar pitch zones, the software flags the sequence as a rehearsed pattern and stores it in the tactics notebook. Analysts can then watch the auto-clipped video of every repeat to confirm it was coached.
Can I use this with U-15 school games or does it need broadcast-quality footage?
You can use a phone recording as long as the camera is steady and the full half-pitch stays in frame. The training data included plenty of shaky amateur clips, so the model learned to track jersey numbers instead of relying on HD close-ups. Expect about 85 % accuracy on 1080p phone video, slightly lower if the frame rate drops below 30 fps.
Which tactical patterns are spotted first—high press, overloads, something else?
High-press triggers show up fastest because they involve five or more players crossing a clear pressure line within one second; that geometry is easy to measure. Overloads on the weak-side wing take longer because the model has to reconstruct off-ball runs that leave the camera view. Version 1.3 ships with 17 pre-trained patterns; the user can add new ones by tagging three examples and retraining overnight on a single GPU.
Does the tool share the raw clips or does it keep everything inside the club?
Everything stays local. The installer sets up a mini-server under the desk; nothing is uploaded to any cloud. Match videos are sliced into 10-second chunks, analysed, then the chunks are deleted unless the analyst chooses to save them. If two clubs want to trade clips they do it through the same secure channel they already use for scouting PDFs.
How long from upload to finished tactics report for a full 90-minute match?
On a laptop with an RTX 3060 the code chews through the video at 6× real time, so the 90-minute file is done in 15 minutes. Generating the PDF heat-maps and the short playlist of key patterns adds another two minutes. Most analysts hit upload before they grab coffee and come back to a finished report.
Can the system tell the difference between a rehearsed set-piece corner and a random scramble that ends in a goal?
Yes. The model watches the five-second slice before the ball is kicked and the five-second slice after. If it sees players moving to preset spots—say, the near-post blocker drifts to the edge of the six-yard box while two others sprint to the penalty spot and the far post—it tags the sequence as structured. If, instead, the attackers are facing their own goal, pointing, or adjusting shirts, the tag flips to ad-hoc. The difference shows up in the probability score: anything above 0.78 is treated as a planned routine, and the clip is pushed to the set-piece library.
