Install
Copy this and paste it into Claude Code, Cursor, or any AI assistant:
I want to install the "depth-estimation" skill in my project. Please run this command in my terminal:

```shell
# Install the skill into your project
mkdir -p .claude/skills/DeepCamera && curl --retry 3 --retry-delay 2 --retry-all-errors \
  -o .claude/skills/DeepCamera/SKILL.md \
  "https://raw.githubusercontent.com/SharpAI/DeepCamera/master/SKILL.md"
```

Then restart Claude Code (or reload the window in Cursor) so the skill is picked up.
Description
Real-time depth map privacy transforms using Depth Anything v2 (CoreML + PyTorch)
Depth Estimation (Privacy)
Real-time monocular depth estimation using Depth Anything v2. Transforms camera feeds with colorized depth maps — near objects appear warm, far objects appear cool. When used for privacy mode, the depth_only blend mode fully anonymizes the scene while preserving spatial layout and activity, enabling security monitoring without revealing identities.
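The warm-to-cool colorization can be sketched as a simple mapping from normalized depth to RGB. This is an illustrative stand-in, not the project's actual colormap (real pipelines typically use a perceptual colormap such as turbo or inferno via OpenCV or matplotlib):

```python
def depth_to_color(depth: float) -> tuple[int, int, int]:
    """Map a normalized depth (0.0 = near, 1.0 = far) to an RGB color.

    Near pixels come out warm (red-dominant), far pixels cool
    (blue-dominant). A linear blend standing in for a real colormap.
    """
    depth = min(max(depth, 0.0), 1.0)              # clamp to [0, 1]
    red = int(255 * (1.0 - depth))                 # strong when near
    blue = int(255 * depth)                        # strong when far
    green = int(64 * (1.0 - abs(2 * depth - 1)))  # slight mid-range tint
    return (red, green, blue)
```

Applied per pixel to a normalized depth map, this produces the overlay described above: nearby objects glow warm, distant background fades to cool.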
Hardware Backends
| Platform | Backend | Runtime | Model |
|----------|---------|---------|-------|
| macOS | CoreML | Apple Neural Engine | apple/coreml-depth-anything-v2-small (.mlpackage) |
| Linux/Windows | PyTorch | CUDA / CPU | depth-anything/Depth-Anything-V2-Small (.pth) |

On macOS, CoreML runs on the Neural Engine, leaving the GPU free for other tasks. The model is auto-downloaded from HuggingFace and stored at ~/.aegis-ai/models/feature-extraction/.
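The platform split in the table can be sketched as a small selection function. The function name and return shape below are illustrative, not DeepCamera's actual API; only the model identifiers come from the table above:

```python
import platform


def pick_backend() -> dict:
    """Choose the depth-model backend for the current platform.

    macOS gets the CoreML package (run on the Apple Neural Engine);
    everything else falls back to the PyTorch checkpoint on CUDA or CPU.
    """
    if platform.system() == "Darwin":
        return {
            "backend": "coreml",
            "runtime": "ane",  # Apple Neural Engine
            "model": "apple/coreml-depth-anything-v2-small",
        }
    return {
        "backend": "pytorch",
        "runtime": "cuda-or-cpu",
        "model": "depth-anything/Depth-Anything-V2-Small",
    }
```

Keeping the choice in one place like this makes it easy to add further backends (e.g. ONNX Runtime) without touching the transform code.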
What You Get
• Privacy anonymization — depth-only mode hides all visual identity
• Depth overlays on live camera feeds
• 3D scene understanding — spatial layout of the scene
• CoreML acceleration — Neural Engine on Apple Silicon (3-5x faster than MPS)
Interface: TransformSkillBase
This skill implements the TransformSkillBase interface. Any new privacy skill can be created by subclassing TransformSkillBase and implementing two methods:

```python
from transform_base import TransformSkillBase

class MyPrivacySkill(TransformSkillBase):
    def load_model(self, config):
        # Load your model, return {"model": "...", "device": "..."}
        ...

    def transform_frame(self, image, metadata):
        # Transform a BGR image, return a BGR image
        ...
```
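As a minimal end-to-end sketch, here is a toy subclass that darkens every pixel. A stand-in base class is defined locally so the snippet runs on its own; the real `TransformSkillBase` ships with the skill, and a real skill would load Depth Anything v2 in `load_model()` and render a depth map in `transform_frame()`:

```python
# Stand-in for the real transform_base.TransformSkillBase so this
# sketch is self-contained; the actual base class ships with the skill.
class TransformSkillBase:
    def load_model(self, config): ...
    def transform_frame(self, image, metadata): ...


class PixelDimSkill(TransformSkillBase):
    """Toy privacy transform: halve every channel of every pixel."""

    def load_model(self, config):
        # No real model needed for this toy transform.
        return {"model": "none", "device": config.get("device", "cpu")}

    def transform_frame(self, image, metadata):
        # Here `image` is a nested list of (B, G, R) tuples standing in
        # for the BGR ndarray a real skill would receive from OpenCV.
        return [[tuple(c // 2 for c in px) for px in row] for row in image]
```

The two-method contract keeps model loading (done once) separate from per-frame work, so swapping in a heavier model does not change the frame loop.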