πŸ₯½ Spatial Scan

Digitize your reality. Leverage on-device AI to map and construct rich 3D models of your physical environments, stored securely on your local node.

Overview

Spatial Scan is a room and environment scanning tool that runs 3D reconstruction models on your Hub (or on a connected edge device) to create detailed 3D models of physical spaces from video or depth sensor input. The resulting models are stored locally and integrated with Digital Memory, making your physical spaces searchable, navigable, and usable as context for your Companion Agents.

Scans can be viewed in the browser, within the Spatial Time Machine, or exported in standard formats (.glb, .obj, .ply).

Key Features

  • Video-based 3D reconstruction β€” create 3D meshes from standard RGB video (phone camera, webcam)
  • Depth sensor support β€” enhanced accuracy with LiDAR (iPhone Pro, iPad Pro, Intel RealSense)
  • Local AI processing β€” NeRF and Gaussian Splatting reconstruction runs on your Hub GPU
  • Semantic labeling β€” AI automatically labels objects and rooms in each scan
  • Digital Memory integration β€” scans are tagged with location, date, and semantic labels for recall
  • Change detection β€” compare scans of the same space over time to detect changes
  • Export β€” download models as .glb, .ply, or Gaussian Splat format
  • Privacy β€” scans never leave your local network

Use Cases

  • Create a 3D record of your home for insurance documentation
  • Document room layouts before a renovation for comparison afterward
  • Build an inventory of your physical possessions linked to Digital Memory
  • Provide spatial context to your Companion Agent (β€œshow me the living room as it looked in January”)
  • Feed scan data into Earth for spatial memory enrichment

Reconstruction Methods

| Method | Hardware Required | Quality | Speed |
|---|---|---|---|
| NeRF | GPU (6 GB+ VRAM) | High | Slow |
| Gaussian Splatting | GPU (8 GB+ VRAM) | Very High | Medium |
| Photogrammetry | CPU (no GPU) | Medium | Slow |
| Depth + RGB fusion | LiDAR + RGB | High | Fast |
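The table above implies a simple decision rule. As an illustration only (the function name and signature are hypothetical, not part of Spatial Scan), the hardware check might be sketched like this:

```python
# Hypothetical helper mirroring the table above: pick the best
# reconstruction method the local hardware can support.
# The 6 GB / 8 GB VRAM thresholds come from the table; everything
# else here is illustrative, not Spatial Scan's actual API.

def pick_method(has_gpu: bool, vram_gb: float = 0.0, has_lidar: bool = False) -> str:
    if has_lidar:
        return "Depth + RGB fusion"   # High quality, fast
    if has_gpu and vram_gb >= 8:
        return "Gaussian Splatting"   # Very high quality, medium speed
    if has_gpu and vram_gb >= 6:
        return "NeRF"                 # High quality, slow
    return "Photogrammetry"           # CPU-only fallback
```

LiDAR wins when available because depth measurements sidestep the expensive geometry estimation the other methods perform.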

Setup

Install from Hub

Search for Spatial Scan in the Hub app store and install.

Verify GPU availability

Spatial Scan requires a GPU for NeRF and Gaussian Splatting. Open http://spatial-scan.ci.localhost β†’ System to confirm GPU detection.

Capture scan footage

For best results:

  • Walk slowly around the subject (45–90 seconds of video)
  • Maintain even lighting (avoid direct sunlight or harsh shadows)
  • Film from multiple heights
  • For LiDAR scans, use the dedicated iOS/Android capture app
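Why the slow, steady walkthrough matters: reconstruction pipelines typically subsample the video to a fixed frame budget, so evenly paced footage yields evenly spaced viewpoints. A rough sketch of that subsampling (illustrative only, not Spatial Scan internals):

```python
# Illustrative sketch: reduce a walkthrough video to a fixed frame
# budget with evenly spaced indices, giving uniform angular coverage
# of the subject. Not part of Spatial Scan's documented interface.

def sample_frame_indices(duration_s: float, fps: float, budget: int) -> list[int]:
    total = int(duration_s * fps)
    if total <= budget:
        return list(range(total))      # short clip: keep every frame
    step = total / budget
    return [int(i * step) for i in range(budget)]

# A 60 s clip at 30 fps, reduced to a 150-frame budget:
indices = sample_frame_indices(60, 30, 150)
```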

Upload and reconstruct

Drag your video file(s) into the Spatial Scan interface and click Reconstruct. Select a method based on your hardware and desired quality.

Usage

Viewing a Completed Scan

Click any completed scan in the library to open the 3D viewer. Use mouse/trackpad to orbit, pan, and zoom. Toggle semantic labels with L.

Browsing in the Timeline

Scans appear in the Digital Memory timeline under the Spatial source. Each scan is tagged with location and date.

Comparing Two Scans

Select two scans of the same space and click Compare. Changed regions are highlighted in the overlay view.
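The underlying idea of change detection can be shown with a toy sketch (the app's actual comparison algorithm is not documented here): a point in the new scan counts as "changed" if no point in the old scan lies within a small distance of it.

```python
import math

# Toy nearest-neighbor change detection between two point clouds.
# Brute force for clarity; real pipelines use spatial indexes.
# The threshold and point data below are made up for illustration.

def changed_points(old, new, threshold=0.05):
    flagged = []
    for p in new:
        nearest = min(math.dist(p, q) for q in old)
        if nearest > threshold:
            flagged.append(p)
    return flagged

old_scan = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
new_scan = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.5, 0.0)]  # one new object
```

Running `changed_points(old_scan, new_scan)` flags only the added point, which corresponds to the highlighted regions in the overlay view.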

Exporting

Click any scan β†’ Export β†’ choose format (.glb for web, .ply for point cloud tools, .splat for Gaussian Splatting viewers).
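Exported .ply files start with a plain-text header (even in binary mode) that declares the element counts, so a quick sanity check on an export is easy to script. A minimal sketch, assuming an ASCII header string:

```python
# Minimal sketch: pull the vertex count out of a PLY header as a
# sanity check on an exported .ply file. PLY headers are plain text
# and end with the line "end_header".

def ply_vertex_count(header_text: str) -> int:
    for line in header_text.splitlines():
        line = line.strip()
        if line.startswith("element vertex"):
            return int(line.split()[-1])
        if line == "end_header":
            break
    raise ValueError("no vertex element found in PLY header")

# Example header (vertex count is made up for illustration):
sample_header = """ply
format ascii 1.0
element vertex 82914
property float x
property float y
property float z
end_header"""
```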

Troubleshooting

Reconstruction fails with OOM error
Reduce the resolution setting before processing. For NeRF, use --scale 0.5. If no suitable GPU is available, consider Photogrammetry mode, which runs on the CPU.
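Downscaling helps because per-frame pixel count (and with it much of the memory footprint) grows with the square of the scale factor. The arithmetic, with an illustrative 1080p input:

```python
# Pixel count scales quadratically with the resolution scale factor,
# so a 0.5 scale cuts per-frame pixels to roughly a quarter.
# 1920x1080 is an example input size, not a Spatial Scan default.

def scaled_pixels(width: int, height: int, scale: float) -> int:
    return int(width * scale) * int(height * scale)

full = scaled_pixels(1920, 1080, 1.0)   # 2,073,600 pixels
half = scaled_pixels(1920, 1080, 0.5)   # 518,400 pixels (25% of full)
```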

Reconstructed model has holes
This usually indicates insufficient coverage in the source video. Capture additional angles of the problem areas and re-process with the supplemental footage merged in.

Scan not appearing in Digital Memory
Ensure the Digital Memory connector is enabled in Settings → Integrations, then check the connector log for API errors.
