By LK Wood IV · 2026-05-09 · ~12 min read · St. Louis County, MO

You want off Google Photos. You have a mini-PC or a NAS, 16 to 32 GB of RAM, and a few terabytes of family photos. Three projects are honest answers in 2026: Immich, PhotoPrism, and Ente Photos running on your own server. They are not interchangeable. They make different bets about who runs the AI, what you pay for, and how your photos look on a phone.

I picked one for the family library on my own homelab in May 2026. This walkthrough explains the call, who the other two are right for, and the specific failure modes you can avoid by choosing on purpose.

TL;DR

| Decision factor | Immich | PhotoPrism CE | Ente self-hosted |
| --- | --- | --- | --- |
| Latest stable (May 2026) | v2.7.5, Apr 13 | build 260305-fad9d5395, Mar 5 | photos-v1.3.40 mobile, May 8 |
| GitHub stars | ~98,000 | ~38,700 | not separately tracked (monorepo) |
| Face recognition included free | Yes | No (Plus, €6+/mo) | Yes (on-device) |
| Server RAM (idle) | ~900 MB to 1.6 GB | lighter Go binary | ~130 to 500 MB |
| Encryption at rest | app-level, server keys | app-level, server keys | true end-to-end, on-device keys |
| Native iOS + Android apps | Yes (Flutter) | PWA only (third-party native apps) | Yes (rapid release cadence) |
| S3 / object storage native | No | Limited | Yes (MinIO and any S3) |
| Hardware acceleration (ML) | CUDA, OpenVINO, ROCm, ARM NN, RKNN | none for ML (Plus-tier transcoding only) | client-side; not applicable |
| License | MIT | AGPL CE / paid Plus | AGPLv3 |
| Best for | most homelabbers | self-hosters who paid for Plus or want the lightest Go binary | privacy-first; people who already trust E2E |

What I picked, in one sentence

I picked Immich for the family library because face recognition, semantic search, and the iOS app all work for free, and the project ships every month.

If you have a 2010-era CPU you want to keep using, an existing PhotoPrism Plus subscription, or you genuinely want end-to-end encryption with on-device ML, the right answer changes. I’ll walk through each.

How I evaluated them

Three buckets:

  1. What the project ships in May 2026 — versions, release cadence, and what is included free for self-hosters.
  2. What the install actually costs you — server RAM, CPU during ML indexing, disk overhead, mobile app pain.
  3. What happens when something goes wrong — backup story, migration off the platform, encryption posture, and project-health risk.

Numbers below come from each project’s own documentation, GitHub release pages, the official blogs, and a small set of recent community benchmark posts. Citations are inline.

Immich

Status, May 2026. Latest stable is v2.7.5 (release notes), shipped April 13, 2026 on top of v2.7.0 (April 7). The project hit ~90,000 GitHub stars in late January 2026 (PixelUnion) and reached approximately 98,000 by mid-April 2026 (Elestio comparison). The team publishes a monthly recap and ships one minor release per month. The 2025 year-in-review reports 8,800+ commits and ~1,700 contributors over the year.

v3.0.0 is coming. The April 2026 recap confirms v3.0.0 is in active preparation. Notable breaking changes already merged include the replaceAsset endpoint removal, old mobile timeline and sync endpoints removal, album.owner relocating to album.users, and asset.duration switching from a string like "00:00:05" to integer milliseconds. The ML service on amd64 will require x86-64-v2 microarchitecture in v3.0, which excludes pre-2010 processors. Plan to upgrade your container hosts before v3 lands.
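If any of your own scripts consume the Immich API, the duration change is the one most likely to bite silently. A small shim like this (a hypothetical helper of my own, not part of any Immich SDK) bridges the old string format and the new integer form:

```python
def duration_to_ms(duration: str) -> int:
    """Convert a v2-era "HH:MM:SS[.fff]" duration string to the integer
    milliseconds the v3.0 API is slated to return instead."""
    hours, minutes, seconds = duration.split(":")
    total_seconds = int(hours) * 3600 + int(minutes) * 60 + float(seconds)
    return round(total_seconds * 1000)

print(duration_to_ms("00:00:05"))    # 5000
print(duration_to_ms("01:02:03.5"))  # 3723500
```

Run it against a few known assets before and after the upgrade to confirm both sides agree.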

The ML stack runs on your server. Three pipelines, all in the immich-machine-learning container (AI features guide):

  • Face detection and recognition uses InsightFace.
  • Smart Search (semantic) uses OpenAI CLIP. ViT-B/32 by default; ViT-L/14 available. Embeddings store in PostgreSQL via VectorChord.
  • Object and scene classification runs as background jobs.

VectorChord matters: Immich migrated off the deprecated pgvecto.rs extension. New self-hosters should start on VectorChord; existing installs need a migration. (Postgres standalone docs)
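A minimal Compose sketch shows how these pieces fit together. Image tags and service names here follow the shape of the official docker-compose.yml at the time of writing; treat this as illustrative and check the current file in the Immich repo before deploying anything.

```yaml
# Sketch only: pin versions from the official docker-compose.yml before use.
services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    volumes:
      - model-cache:/cache          # downloaded CLIP / InsightFace models

  database:
    # Immich's Postgres image ships with VectorChord preinstalled;
    # new installs should start here rather than on pgvecto.rs.
    image: ghcr.io/immich-app/postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: example    # change this
      POSTGRES_DB: immich
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  model-cache:
  pgdata:
```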

Hardware acceleration covers more options than the others. Per the hardware-accelerated ML docs, Immich supports CUDA (NVIDIA, compute capability ≥ 5.2, driver ≥ 545, CUDA 12.3), OpenVINO (Intel Iris Xe, Arc, integrated), ROCm (AMD GPUs; first inferences are slow because models compile at runtime), ARM NN (Arm Mali; FP32 by default), and RKNN (Rockchip RK3566/68/76/88 with NPU). Video transcoding adds NVENC, Quick Sync, RKMPP, and VAAPI (transcoding docs).
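Wiring a GPU in follows the extends pattern from the hardware-accelerated ML docs: you download hwaccel.ml.yml next to your compose file and point the ML service at the matching backend. The snippet below is a CUDA-flavored sketch; confirm the exact service names against the current hwaccel.ml.yml before relying on it.

```yaml
# docker-compose.override.yml (sketch; verify against the current hwaccel.ml.yml)
services:
  immich-machine-learning:
    extends:
      file: hwaccel.ml.yml
      service: cuda                # or openvino / rocm / armnn / rknn
    image: ghcr.io/immich-app/immich-machine-learning:release-cuda
```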

Real footprint. At idle, Immich uses ~900 MB to 1.6 GB of RAM. Upload processing on an AMD Ryzen 5 PRO 2400GE peaks at ~25% CPU and 1.6 GB of RAM (KittMedia comparison). Plan for ~1.15× the library size in extra disk for thumbnails (Glukhov self-hosting guide).

Initial ML indexing is slow on weak CPUs. A NAS-grade Intel Celeron N5095 hits 500 to 1,000 photos per hour. An Intel Core i5/i7 desktop hits 3,000 to 5,000 per hour. An ARM Cortex-A55 budget NAS is closer to 50 to 100 per hour, which puts a 10,000-photo library at 4 to 8 days for first indexing (same source). One Reddit user reported that a 200,000-image transfer on an Intel i7-4790K with no GPU took ~3 days of ML processing plus another week for OCR. Library scanning itself is much faster after the v1.130 rewrite — a 19,000-asset library now scans in 9 seconds versus 1 minute 40 seconds previously, and 5-million-asset libraries scan in under 7 minutes (Linuxiac).
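The throughput figures turn into wall-clock time with simple division; a helper like this (my own, purely illustrative) makes the planning concrete:

```python
def indexing_eta_days(photo_count: int, photos_per_hour: float) -> float:
    """Rough first-indexing estimate from the benchmark throughput figures."""
    return photo_count / photos_per_hour / 24

# 10,000 photos on an ARM Cortex-A55 NAS at ~75 photos/hour:
print(round(indexing_eta_days(10_000, 75), 1))     # 5.6 days, mid-range of 4 to 8
# The same library on a desktop i5/i7 at ~4,000 photos/hour:
print(round(indexing_eta_days(10_000, 4_000), 2))  # 0.1 days, about 2.5 hours
```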

Storage. Immich does not natively support S3 as of May 2026. FUSE-mounted S3 (s3fs, s3ql) is a community workaround with documented latency tradeoffs (gvolpe). The database is PostgreSQL 14–18 with VectorChord; Redis (Valkey) handles background queues via BullMQ (architecture docs). If you want object storage, Ente is a better starting point.
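If you must put the Immich library on object storage anyway, the community workaround is a FUSE mount. An /etc/fstab line for s3fs looks roughly like this; the bucket name, mount point, endpoint, and credential path are placeholders, and expect the latency tradeoffs the gvolpe writeup documents.

```shell
# /etc/fstab (sketch): mount an S3 bucket where UPLOAD_LOCATION points.
# Bucket, endpoint, and credential path below are placeholders.
s3fs#immich-library /srv/immich/library fuse _netdev,allow_other,url=https://s3.example.com,passwd_file=/etc/passwd-s3fs 0 0
```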

Mobile. Native Flutter apps for iOS and Android. The Google Play rating is 5.0 (~6,470 reviews) and the App Store rating is 4.8 (669 ratings) (Play, App Store). Background backup, timeline browsing, map view, face browsing, non-destructive editing, home-screen widgets, and OCR all work today. Full mobile editing parity is on the v3.0 roadmap.

Pick Immich if you want everything included free, you have a 4-core or better x86 server with at least 8 GB of RAM, and you do not need true end-to-end encryption.

PhotoPrism

Status, May 2026. PhotoPrism uses date-stamped builds rather than semver. The latest stable is build 260305-fad9d5395, March 5, 2026 (GitHub releases). One production release has shipped in 2026, against Immich’s four. GitHub stars are ~38,700 (repo).

The licensing model is the catch. PhotoPrism Community Edition is AGPL and free, but key features sit behind a paid Plus License (editions page). Essentials is €2/month and unlocks most of the everyday features; Plus is €6+/month and adds face recognition, the admin UI, and deduplication. If face recognition is the entire reason you are leaving Google Photos, the math is straightforward: PhotoPrism CE alone will not give it to you.

The ML stack moved off TensorFlow 1. The April 2025 upgrade to TensorFlow v2.18.0 fixed years of stale dependencies, and the November 2025 release introduced a new CNN face detection engine (release notes). The March 2026 build added Ollama integration for caption generation, including a “thinking” response fallback for reasoning models (Pro release notes).

The hardware story is narrower than Immich's. Hardware acceleration is for transcoding only — Intel Quick Sync, VAAPI, and NVIDIA NVENC (transcoding docs). ML inference does not get a GPU path. The recommended database is MariaDB; SQLite is testing-only (advanced database docs). RAW handling uses Darktable v5.0.1 and RawTherapee v5.11 (RAW docs) and requires the Essentials tier.

Mobile is the weak spot. PhotoPrism ships an official PWA (PWA docs) and lists third-party native apps (native apps docs) — Gallery for PhotoPrism, PhotoSync, and Stream. None match what Immich and Ente ship under their own brand. If you live on iOS, this is the friction point.

Pick PhotoPrism if you already paid for Plus or are willing to, you want a lighter Go-binary footprint, and PWA-on-the-phone is acceptable.

Ente Photos (self-hosted)

Status, May 2026. The mobile apps ship every two weeks or less. Latest is photos-v1.3.40, May 8, 2026 (GitHub releases). The self-hosted server (Museum) does not use a visible semver tag; you pull ghcr.io/ente-io/server with date or latest tags (self-hosting quickstart).

This is the only true E2E option. The server stores only encrypted blobs; all ML runs on-device using ONNX Runtime (ML architecture page). The ML models are MobileCLIP for semantic search, plus YOLO5Face and MobileFaceNet for face recognition. A Cure53 audit funded through CERN landed in October 2025 and a Rust crypto audit landed in April 2026 (blog). If you have a threat model where the server cannot be trusted, this is the only project of the three that handles it.

The server is small. Museum (Go) plus PostgreSQL plus MinIO or any S3-compatible storage. Idle RAM is ~130 to 500 MB (KittMedia). The cost shifts to the client device, which carries the CPU and battery hit during ML indexing — and to S3 storage, which you pay for separately if you do not run MinIO locally.
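A minimal self-hosted Ente stack in Compose looks roughly like the following. This is a sketch: the quickstart's compose file is the source of truth, and Museum additionally needs a museum.yaml config pointing at the database and bucket.

```yaml
# Sketch of the Museum + Postgres + MinIO trio; follow the official
# self-hosting quickstart for the real compose file and museum.yaml.
services:
  museum:
    image: ghcr.io/ente-io/server:latest
    depends_on: [postgres, minio]
    ports:
      - "8080:8080"

  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: ente
      POSTGRES_PASSWORD: example   # change this
      POSTGRES_DB: ente_db

  minio:
    image: minio/minio
    command: server /data --console-address ":3201"
    environment:
      MINIO_ROOT_USER: ente
      MINIO_ROOT_PASSWORD: example # change this
    volumes:
      - minio-data:/data

volumes:
  minio-data:
```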

2026 has been a strong year for features. The February blog added likes, comments, and album admin roles. The March blog added an offline gallery mode, faster ML, vector DB integration for search, Memory Lane, QR code detection, and smart albums. App store ratings sit at 4.6 on Google Play and 4.7 on the App Store (Play, App Store).

The cost of E2E is real. Server-side migration tools are limited because the server cannot read your photos. Bulk import from a Google Takeout uses the desktop app and your laptop, not the server. Face recognition for young children who resemble each other is a known soft spot (r/enteio April 2026). And no server-side ML upgrade path is possible by design — the model improves only when the client app does.

Pick Ente if end-to-end encryption is non-negotiable, you want native S3 storage, or you prefer the smallest server footprint and the fastest mobile release cadence.

Resource and license summary

| Platform | Idle RAM | License | Self-hosting cost | Native S3 |
| --- | --- | --- | --- | --- |
| Immich | ~900 MB to 1.6 GB | MIT | free, no feature gating | No |
| PhotoPrism CE | light Go binary | AGPL | free, missing features | Limited |
| PhotoPrism Plus | light Go binary | Plus License | €6+/month for face recognition | Limited |
| Ente self-hosted | ~130 to 500 MB | AGPLv3 | free server; S3 storage cost separate | Yes |

Backup, restore, and lock-in

A self-hosted photo library is only as safe as the backup you actually run. The three projects answer this differently.

Immich stores photos in a flat filesystem under your configured UPLOAD_LOCATION. Originals are never modified, so an rsync to a separate disk or to off-site object storage covers them. The metadata, faces, ML embeddings, albums, and shared link state all sit in PostgreSQL, so a nightly pg_dump is mandatory. The Admin UI added in v2.5.0 schedules database backups for you (Backup and restore docs). The community tool immich-go adds bulk export back to a folder structure when you want to walk away.
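As a concrete sketch, both jobs fit in one cron fragment. The paths and container name are assumptions based on a default compose install; adjust them to yours, and note the escaped % that cron requires inside commands.

```shell
# /etc/cron.d/immich-backup (sketch; paths and container name are assumptions)
# 02:30 nightly: mirror originals to a second disk, then dump the database.
30 2 * * * root rsync -a --delete /srv/immich/library/ /mnt/backup/immich/library/
45 2 * * * root docker exec -t immich_postgres pg_dumpall --clean --if-exists -U postgres | gzip > /mnt/backup/immich/db-$(date +\%F).sql.gz
```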

PhotoPrism uses MariaDB for the index database; the photos themselves live in originals/. Metadata can be exported to sidecar XMP and JSON files so a future migration is not entirely locked into the PhotoPrism schema (metadata exports docs).

Ente is the most awkward to back up because the server cannot decrypt anything. Originals are encrypted blobs in MinIO or your S3 bucket, and the only authoritative export path is the Ente desktop app or the official CLI signed in as your account. That is by design — and it is the cost of true E2E. The export documentation walks through the process. If you want to abandon Ente later, plan that migration on the client side from a healthy desktop.

Whichever you pick, the rule is the same: back up the originals filesystem and the database to a different physical disk and a different building. The 3-2-1 rule applies to family photos as much as to anything else.

Migration off Google Photos

All three accept Google Takeout exports. Immich and PhotoPrism import on the server side. Ente uses the desktop app to import on the client because the server cannot decrypt anything. Plan disk space accordingly: Takeout exports a 200 GB library as 200 GB of zip archives, which you then unzip and import, so budget at least 600 GB free during the migration window.

If your library lives on a separate NAS or storage VM, this is a good moment to revisit storage layout. The Proxmox + TrueNAS + Unraid post on this site walks through which backend handles the photo-library access pattern best — sequential writes during ingest, then bursty random reads at thumbnail-rendering time. See Proxmox vs TrueNAS vs Unraid storage backends 2026 for the IOPS math.

What I’d actually do on day one

  1. Spin up a mini-PC or VM with 4 cores and 16 GB of RAM. The best mini-PCs for homelab in 2026 shortlist works for all three projects.
  2. If you have an NVIDIA GPU available, point Immich at CUDA from day one. Initial indexing on a 100,000-photo library is the difference between a weekend and a week.
  3. Run a 5,000-photo trial import before pointing the family at it. Verify face recognition quality on real subjects, check the iOS background backup, and time-box yourself one weekend before committing.
  4. Back up the database and the originals separately. The originals are just a filesystem; copy them with rsync. The Postgres database holds the ML embeddings, faces, and albums — it gets a nightly pg_dump.

The “right” choice is the one whose tradeoffs you are willing to live with for five years. Immich is the safe pick for most homelabbers in May 2026. PhotoPrism is the right pick if you already pay for Plus. Ente is the right pick if you mean it about end-to-end encryption.