
Raspberry Pi Camera Projects: Security Cam, Time-Lapse, and Object Detection

Viktor Build · ~12 min read

Three practical Raspberry Pi camera projects: motion-activated security camera, time-lapse video, and real-time object detection with TensorFlow Lite. Step-by-step setup.


So you’ve got a Raspberry Pi and a camera module, and you’re wondering what to actually build with it. The short answer: you can turn it into a motion-activated security camera, a nature time-lapse rig, or a real-time object detection system. This roundup covers all three projects, with wiring guides, working code, and the basic computer vision concepts you need to adapt them further.

The Pi Camera Module (both the standard v2 and the newer Module 3) connects directly to the CSI ribbon port on any Pi with a camera connector. No soldering, no complicated wiring — it’s the cleanest way to add vision to a project.


What You’ll Need (Common to All Projects)

Before diving into the individual builds, here’s the shared hardware and software setup.

Hardware

  • Raspberry Pi (3B+, 4B, or Zero 2 W recommended for camera projects)
  • Raspberry Pi Camera Module (v2 or Module 3)
  • 15-pin ribbon cable (usually included)
  • MicroSD card (16GB or larger, flashed with Raspberry Pi OS)
  • Power supply (5V/3A for Pi 4, 5V/2.5A for Pi 3B+)
  • Optional: Pi case with camera mount, heatsinks, jumper wires for PIR sensor

Initial Software Setup

  1. Enable the camera interface:
    sudo raspi-config
    
    Navigate to Interface Options → Camera → Enable. (On recent Raspberry Pi OS releases the camera stack is enabled by default, so this entry may be absent or unnecessary.)
  2. Update your system:
    sudo apt update && sudo apt upgrade -y
    
  3. Install core camera tools:
    sudo apt install python3-picamera2 python3-opencv -y
    
    picamera2 is the modern library for Pi cameras (replacing the deprecated picamera). OpenCV (cv2) handles all computer vision tasks.

Test your camera:

from picamera2 import Picamera2
picam2 = Picamera2()
picam2.start()
picam2.capture_file("test.jpg")
picam2.stop()

If you see a test.jpg image in your home directory, everything works.


Project 1: Motion-Activated Security Camera

This project turns your Pi into a basic security cam that records video clips only when motion is detected. No cloud subscription, no monthly fees — just local storage and optional email alerts.

How Motion Detection Works

The simplest approach uses frame differencing: compare two consecutive frames pixel-by-pixel. If enough pixels changed beyond a threshold, trigger a recording. OpenCV handles this in three lines.

Wiring (Optional PIR Sensor)

You can use the camera alone (software-based detection) or combine it with a PIR motion sensor for higher accuracy. For the PIR version:

PIR Sensor    Raspberry Pi
VCC           5V (pin 2)
GND           GND (pin 6)
OUT           GPIO17 (pin 11)

Code: Motion-Triggered Recording

Save this as security_cam.py:

from picamera2 import Picamera2
import cv2
import numpy as np
import time
from datetime import datetime

picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration(main={"size": (640, 480), "format": "RGB888"}))  # 3-channel frames for OpenCV
picam2.start()

# Motion detection variables
prev_frame = None
motion_threshold = 5000  # adjust based on lighting
recording = False
out = None

try:
    while True:
        frame = picam2.capture_array()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)

        if prev_frame is None:
            prev_frame = gray
            continue

        # Compute frame difference
        delta = cv2.absdiff(prev_frame, gray)
        thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        motion_detected = False
        for contour in contours:
            if cv2.contourArea(contour) > motion_threshold:
                motion_detected = True
                break

        if motion_detected:
            last_motion_time = time.time()
            if not recording:
                timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
                filename = f"motion_{timestamp}.avi"
                fourcc = cv2.VideoWriter_fourcc(*'XVID')
                out = cv2.VideoWriter(filename, fourcc, 20.0, (640, 480))
                recording = True
                print(f"Motion detected, recording to {filename}")

        if recording:
            out.write(frame)
            # Stop recording once 10 seconds pass with no motion
            if time.time() - last_motion_time > 10:
                recording = False
                out.release()
                print("Recording stopped")

        prev_frame = gray

except KeyboardInterrupt:
    if recording:
        out.release()
    picam2.stop()
    cv2.destroyAllWindows()

Run it with python3 security_cam.py. It saves .avi clips to the current directory. You can extend it with email notifications (smtplib) or cloud uploads (boto3).
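The email-alert extension could look like the sketch below; the SMTP server, port, credentials, and addresses are all placeholders you would replace with your own:

```python
import smtplib
from email.message import EmailMessage

def build_alert(filename, sender="pi@example.com", recipient="you@example.com"):
    """Build an alert email for a saved motion clip (addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = "Motion detected"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"Saved clip: {filename}")
    return msg

def send_alert(msg, server="smtp.example.com", port=587, user="user", password="secret"):
    """Send via SMTP with STARTTLS; all connection details are placeholders."""
    with smtplib.SMTP(server, port) as s:
        s.starttls()
        s.login(user, password)
        s.send_message(msg)
```

Call `send_alert(build_alert(filename))` right after the recording stops; for Gmail and similar providers you will need an app-specific password.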

Potential Pitfalls

  • False triggers: Adjust motion_threshold near the top of the script; higher values mean less sensitivity.
  • Low light: Use a NoIR camera variant (e.g. Camera Module 3 NoIR) with an infrared illuminator for night vision.

Project 2: Time-Lapse Camera for Nature or Construction

Time-lapse is one of the most satisfying Pi camera projects. Set it up outside a window or on a balcony, and it captures a frame every N seconds, then stitches them into a video.

How Time-Lapse Works

The Pi captures frames at a fixed interval (e.g., 1 frame every 30 seconds). After collecting hundreds or thousands of images, you combine them into a video at 24-30 fps to compress hours into minutes.
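For a concrete sense of the compression, here is the arithmetic with assumed settings (a 30-second interval, 8 hours of shooting, 24 fps playback):

```python
# Time-lapse compression arithmetic for assumed settings
interval_s = 30        # capture one frame every 30 seconds
capture_hours = 8      # total shooting time
playback_fps = 24      # playback frame rate

frames = capture_hours * 3600 // interval_s   # frames captured
video_seconds = frames / playback_fps         # resulting video length
print(frames, video_seconds)  # 960 40.0
```

Eight hours collapse into 40 seconds of footage; halve the interval to double the video length.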

Parts List

  • Raspberry Pi + Camera Module
  • Weatherproof enclosure (optional, for outdoor use)
  • USB power bank (for untethered operation)

Code: Time-Lapse with Automatic Video Stitching

Save as timelapse.py:

from picamera2 import Picamera2
import time
import os
from datetime import datetime

# Configuration
INTERVAL = 30  # seconds between shots
TOTAL_DURATION = 3600  # total capture time in seconds (1 hour here)
OUTPUT_DIR = "timelapse_images"
VIDEO_FPS = 24

os.makedirs(OUTPUT_DIR, exist_ok=True)

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(main={"size": (1920, 1080)}))
picam2.start()

start_time = time.time()
photo_count = 0

print(f"Capturing time-lapse: {INTERVAL}s interval, {TOTAL_DURATION}s duration")

while time.time() - start_time < TOTAL_DURATION:
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"{OUTPUT_DIR}/img_{timestamp}.jpg"
    picam2.capture_file(filename)
    photo_count += 1
    print(f"Captured {filename}")
    time.sleep(INTERVAL)

picam2.stop()
print(f"Done. Captured {photo_count} images.")

# Stitch into video with ffmpeg (timestamped filenames sort correctly with glob)
video_name = f"timelapse_{datetime.now().strftime('%Y%m%d')}.mp4"
os.system(f"ffmpeg -framerate {VIDEO_FPS} -pattern_type glob -i '{OUTPUT_DIR}/*.jpg' "
          f"-c:v libx264 -pix_fmt yuv420p {video_name}")
print(f"Video created: {video_name}")

Install ffmpeg if needed: sudo apt install ffmpeg.

Customization Ideas

  • Day-only capture: Check the average pixel brightness before saving; skip dark frames.
  • Sunrise/sunset trigger: Use the ephem library to start/stop based on solar elevation.
  • Battery optimization: Use a Raspberry Pi Zero 2 W and a 10,000 mAh power bank — you’ll get 8+ hours.
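The day-only idea above can be sketched as a simple mean-brightness check; the cutoff of 40 is an assumed starting point you would tune for your scene:

```python
import numpy as np

def is_daylight(rgb_frame, cutoff=40):
    """True if the frame's mean pixel value (0-255) exceeds the cutoff."""
    return bool(rgb_frame.mean() > cutoff)

# Synthetic test frames: a dark night shot and a brighter daytime one
night = np.full((1080, 1920, 3), 10, dtype=np.uint8)
day = np.full((1080, 1920, 3), 120, dtype=np.uint8)
print(is_daylight(night), is_daylight(day))  # False True
```

In the capture loop, call this on the array from `picam2.capture_array()` and skip the `capture_file` call when it returns False.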

Project 3: Real-Time Object Detection

This is the headline project: a Pi running a lightweight neural network that identifies objects (people, cars, animals) in real time. Thanks to TensorFlow Lite and MobileNet SSD, even a Pi 4 can manage 2-5 frames per second.

How Object Detection Works

A pre-trained model (MobileNet SSD) has learned to recognize 90 common objects. The Pi feeds camera frames into the model, which returns bounding boxes and labels. We then draw those on the video stream.
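The boxes come back as normalized 0-1 coordinates, so they must be scaled to pixel positions before drawing. With assumed example values:

```python
# Scale a normalized box (ymin, xmin, ymax, xmax) to pixel coordinates
h, w = 300, 300                                # frame size in pixels
ymin, xmin, ymax, xmax = 0.1, 0.2, 0.5, 0.8   # assumed example model output
box_px = (int(xmin * w), int(ymin * h), int(xmax * w), int(ymax * h))
print(box_px)  # (60, 30, 240, 150)
```

The detection script below does exactly this conversion for every box above the confidence threshold.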

Installation

Install the TensorFlow Lite runtime (optimized for ARM):

sudo apt install libatlas-base-dev
pip install tflite-runtime

Download the model files:

wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip

Code: Real-Time Object Detection Overlay

Save as object_detect.py:

from picamera2 import Picamera2
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

# Load TFLite model
interpreter = tflite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# First 10 COCO classes in model order (the zip's labelmap.txt has all 90)
labels = ["person", "bicycle", "car", "motorcycle", "airplane",
          "bus", "train", "truck", "boat", "traffic light"]

picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration(main={"size": (300, 300), "format": "RGB888"}))  # 3-channel frames for the model
picam2.start()

try:
    while True:
        frame = picam2.capture_array()
        # Model expects uint8 RGB input
        input_data = np.expand_dims(frame, axis=0).astype(np.uint8)
        interpreter.set_tensor(input_details[0]['index'], input_data)
        interpreter.invoke()

        # Get detection results
        boxes = interpreter.get_tensor(output_details[0]['index'])[0]
        classes = interpreter.get_tensor(output_details[1]['index'])[0]
        scores = interpreter.get_tensor(output_details[2]['index'])[0]

        h, w, _ = frame.shape
        for i in range(len(scores)):
            if scores[i] > 0.5:  # confidence threshold
                ymin, xmin, ymax, xmax = boxes[i]
                x = int(xmin * w)
                y = int(ymin * h)
                x2 = int(xmax * w)
                y2 = int(ymax * h)
                label_id = int(classes[i])
                label = labels[label_id] if label_id < len(labels) else f"ID{label_id}"
                cv2.rectangle(frame, (x, y), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, f"{label}: {scores[i]:.2f}", (x, y-10),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

        cv2.imshow("Object Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

except KeyboardInterrupt:
    pass

cv2.destroyAllWindows()
picam2.stop()

Run it: python3 object_detect.py. Point the camera at your room — it’ll detect you as a “person” with ~80-95% confidence. The Pi 4 handles about 3-4 FPS at 300x300 resolution. For better performance, use a Pi 5 or drop to 224x224.

Extending to Specific Objects

If you only care about people or cars, filter by label ID:

if label_id == 0:  # "person" is class 0
    print("Person detected")  # trigger an alert or save a clip here

Combine with the security cam code from Project 1 to make a person-only recording system.


Choosing the Right Pi for Camera Projects

Project            Minimum Pi     Recommended Pi   Resolution   FPS
Security Cam       Pi 3B+         Pi 4B (2GB+)     640x480      20-30
Time-Lapse         Pi Zero 2 W    Pi 3B+           1920x1080    N/A (stills)
Object Detection   Pi 4B (4GB)    Pi 5 (4GB+)      300x300      2-5

The Pi Zero 2 W works fine for time-lapse (it still captures full-resolution stills), but struggles with real-time video processing. For object detection, skip the Zero entirely: you need the quad-core Cortex-A72 of the Pi 4 or the faster Cortex-A76 of the Pi 5.


Conclusion

These three projects — security cam, time-lapse, and object detection — cover the most common use cases for the Raspberry Pi Camera Module. Start with the motion-triggered recorder for a practical home security upgrade. Graduate to the time-lapse rig for creative outdoor projects. Then level up to real-time object detection when you’re ready to dive into computer vision.

All the code works with Python 3 and the modern picamera2 library, and everything runs locally — no cloud dependencies. The full source for all three projects is available on GitHub for you to fork and modify.
