Cloudinary's Great — Until It Isn't
Cloudinary solves real image optimization pain. But I've watched the bill quietly double on two client accounts, and that changes the calculus.
Cloudinary does what it promises. You hand it an image URL, append a transformation string, and you get back a properly sized, format-converted, CDN-cached file in milliseconds. That's genuinely useful. What's less useful is opening an invoice and realizing you're paying $249/month for a real estate site that uploads floor plan photos.
What Problem It Actually Solves
The honest version: serving images correctly is boring, tedious, and surprisingly easy to get wrong.
You've got a client uploading 4MB iPhone photos to a property listing. That image needs to be a 400px thumbnail on the search results page, a 1200px hero on the detail page, and a 60px avatar somewhere in a sidebar widget. It needs to be WebP for Chrome, JPEG fallback for older browsers, and it needs to load fast enough that Google's Core Web Vitals don't tank the site's SEO.
Doing all of that yourself means: resize on upload (but you don't know all the sizes yet), or resize on first request and cache (now you're managing a file cache), or use <picture> elements with srcset and generate every variant ahead of time (now you're running queue jobs for every upload).
Cloudinary skips all of that. You store the original once and construct a URL with transformation parameters. The CDN handles caching. It just works.
For a new project under time pressure, that trade-off is often correct.
A Working Integration
Here's how I actually wire this up in a Laravel app. There's an official cloudinary-labs/cloudinary-laravel package, and I use it for uploads, but for URL generation I usually build a thin wrapper myself — the package does more than I need.
```php
<?php

namespace App\Support;

class CloudinaryUrl
{
    private string $cloudName;

    private string $baseUrl = 'https://res.cloudinary.com';

    public function __construct()
    {
        $this->cloudName = config('services.cloudinary.cloud_name');
    }

    public function transform(
        string $publicId,
        int $width = 0,
        int $height = 0,
        string $crop = 'fill',
        string $format = 'auto',
        int $quality = 80
    ): string {
        $transforms = [];

        if ($width) {
            $transforms[] = "w_{$width}";
        }
        if ($height) {
            $transforms[] = "h_{$height}";
        }

        $transforms[] = "c_{$crop}";
        $transforms[] = "f_{$format}";
        $transforms[] = "q_{$quality}";

        $t = implode(',', $transforms);

        return "{$this->baseUrl}/{$this->cloudName}/image/upload/{$t}/{$publicId}";
    }
}
```
Then in a Blade view:
```blade
@php
    $cdn = app(\App\Support\CloudinaryUrl::class);
@endphp

<img
    src="{{ $cdn->transform($listing->photo_id, 1200, 630) }}"
    srcset="
        {{ $cdn->transform($listing->photo_id, 600, 315) }} 600w,
        {{ $cdn->transform($listing->photo_id, 1200, 630) }} 1200w
    "
    sizes="(max-width: 640px) 600px, 1200px"
    alt="{{ $listing->address }}"
    loading="lazy"
/>
```
f_auto is the flag I care about most — Cloudinary detects the Accept header and serves WebP or AVIF when supported, JPEG otherwise. That alone can cut image payload by 30-50% without you doing anything.
For uploads, I push from the server rather than using their upload widget. Keeps the credentials server-side and lets me store just the public_id in the database:
```php
use CloudinaryLabs\CloudinaryLaravel\Facades\Cloudinary;

$result = Cloudinary::upload($request->file('photo')->getRealPath(), [
    'folder' => 'listings',
    'use_filename' => true,
    'unique_filename' => true,
]);

$listing->update([
    'photo_id' => $result->getPublicId(),
]);
```
Store the public ID, not the full URL. You'll want to reconstruct the URL with different transformations later, and if you ever migrate off Cloudinary, you're not doing a find-and-replace across millions of rows.
The Gotchas
Transformation URL caching is forever, which is mostly good until it isn't.
Once Cloudinary generates and caches a transformed variant, it stays cached. If you need to change a transformation — say the designer decides crop gravity should be auto instead of center — you either invalidate the cache (costs API credits, rate-limited) or you change the transformation string (new URL, new cache miss). Plan for this. I name my transformation presets in a config file so they're easy to version.
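A minimal sketch of what I mean by versioned presets. The preset names and the config-map approach are mine, not a Cloudinary API; the point is that editing a preset string produces a new URL, which is a clean cache miss instead of a paid invalidation. In a Laravel app this map would live in something like config/images.php.

```php
<?php

// Hypothetical preset map. Changing any value (or bumping q_auto to q_80, say)
// changes the transformation string, so every URL built from the preset is new
// and the old cached variants are simply never requested again.
const IMAGE_PRESETS = [
    'card' => 'w_400,h_300,c_fill,g_auto,f_auto,q_auto',
    'hero' => 'w_1200,h_630,c_fill,f_auto,q_80',
];

function presetUrl(string $cloudName, string $preset, string $publicId): string
{
    $t = IMAGE_PRESETS[$preset]
        ?? throw new InvalidArgumentException("Unknown preset: {$preset}");

    return "https://res.cloudinary.com/{$cloudName}/image/upload/{$t}/{$publicId}";
}
```

Because the presets are in version control, "the designer changed the crop" becomes a one-line diff instead of an invalidation ticket.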
Eager transformations vs. lazy generation.
By default, Cloudinary generates the transformed version the first time someone requests it. First visitor gets a slow response. For high-traffic sites that's fine. For a healthcare client where a specific image might only be viewed a handful of times, that first-load latency showed up in support tickets. You can tell Cloudinary to pre-generate on upload with eager transforms, but that adds complexity and costs more against your transformation quota.
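If you do go the eager route, it looks roughly like this. The upload API's eager and eager_async options are real Cloudinary parameters; the specific sizes and the helper function are just illustrative. Each eager entry is one transformation rendered at upload time, so the first visitor never pays the generation cost — but each one also counts against your quota whether or not it's ever viewed.

```php
<?php

// Sketch: build an options array that asks Cloudinary to pre-render two
// variants at upload time. Pass the result as the second argument to
// Cloudinary::upload(). The folder name and sizes are examples.
function eagerUploadOptions(string $folder): array
{
    return [
        'folder' => $folder,
        'eager' => [
            ['width' => 400,  'height' => 300, 'crop' => 'fill'],
            ['width' => 1200, 'height' => 630, 'crop' => 'fill'],
        ],
        // Hand generation to a background job instead of blocking the
        // upload response while variants render.
        'eager_async' => true,
    ];
}
```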
The quota math is non-obvious.
Cloudinary prices on "credits" — a unit that covers storage, transformations, and bandwidth in ways that aren't immediately intuitive. A site I inherited was burning through credits on a carousel that loaded 8 variants of each hero image on every page load, because someone hadn't set cache headers correctly and the CDN wasn't caching. The fix was a one-line Nginx config, but the damage was already in the invoice. Audit your cache hit rates in the Cloudinary dashboard before you assume it's working correctly.
Signed URLs aren't the default.
Anyone can construct a Cloudinary URL with arbitrary transformations against your cloud name. For most use cases that's fine — the original file is still private if you set it that way. But if you're serving anything sensitive or you're worried about transformation abuse (someone hammering your cloud name with resize requests), you want signed URLs. Add that overhead to the integration.
```php
// Signed URL generation — sign_url tells the SDK to add the s--...-- token
// to the generated URL. The API secret comes from the SDK's own
// configuration; never pass it as a transformation parameter.
$params = [
    'width'    => 800,
    'crop'     => 'scale',
    'sign_url' => true,
];
```
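For the curious, here's what the SDK is doing under the hood, as I understand Cloudinary's documented signed-delivery scheme: the signature is a URL-safe base64 SHA-1 of the transformation string plus public ID plus your API secret, truncated to 8 characters and wrapped in s--...--. You'd never hand-roll this in production — let the SDK do it — but it demystifies the token.

```php
<?php

// A from-scratch sketch of Cloudinary's signed delivery URL format.
// Cloud name, transforms, and secret here are placeholders.
function signedUrl(string $cloudName, string $transforms, string $publicId, string $apiSecret): string
{
    $toSign = "{$transforms}/{$publicId}";

    // SHA-1 over the signed portion plus the secret, URL-safe base64,
    // truncated to 8 characters per the documented scheme.
    $digest = base64_encode(sha1($toSign . $apiSecret, true));
    $sig = substr(strtr($digest, '+/', '-_'), 0, 8);

    return "https://res.cloudinary.com/{$cloudName}/image/upload/s--{$sig}--/{$toSign}";
}
```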
The AI add-ons burn credits fast.
Cloudinary has pushed a lot of AI-powered stuff — background removal, auto-cropping on faces, generative fill. These cost significantly more per transformation. I've seen clients enable them experimentally and then forget to disable them. Check your credit breakdown monthly if you're using any of the ML features.
When I'd Reach for It
Cloudinary earns its keep in a few specific situations:
A new project where image handling isn't the core product. I'm not going to spend a sprint building a self-hosted transformation pipeline for a client's e-commerce MVP. Cloudinary is the right call. Revisit in 12-18 months when you know your traffic patterns.
Projects with genuinely unpredictable image dimensions. Real estate, e-commerce, marketplaces — users upload anything, in any aspect ratio, at any file size. Cloudinary's art direction tools (gravity, padding, background fill) handle the edge cases better than anything I've built myself.
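To make "art direction tools" concrete, this is the kind of transformation string I mean. The g_auto, c_pad, and b_auto parameters are real Cloudinary transformations; the helper and the demo cloud name are illustrative.

```php
<?php

// Two strategies for an arbitrary-aspect-ratio upload: crop to the
// interesting region (g_auto), or pad to the target box and fill the
// borders with a color Cloudinary picks from the image (b_auto).
function artDirected(string $publicId, int $w, int $h, bool $pad = false): string
{
    $t = $pad
        ? "w_{$w},h_{$h},c_pad,b_auto"   // pad, auto-matched background
        : "w_{$w},h_{$h},c_fill,g_auto"; // crop, keep the subject

    return "https://res.cloudinary.com/demo/image/upload/{$t},f_auto,q_auto/{$publicId}";
}
```

Padding tends to win for product shots on white backgrounds; auto-gravity cropping for photos of rooms and people.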
When you need format negotiation without touching your CDN config. The f_auto trick works immediately, out of the box, across all your images. If you're running on shared hosting or a managed platform where you can't control response headers, this alone justifies the cost.
When I'd Bring It In-House
Two triggers make me start the migration conversation:
First, the bill exceeds what a VPS and some open-source tooling would cost. For most of my clients that number is somewhere around $100-150/month. At that point, I start pricing out a small Hetzner box running Imgproxy in front of an S3-compatible bucket. Imgproxy does on-the-fly resizing and format conversion, it's fast, and the operational overhead is low. I've migrated two clients to this setup and cut their image-serving costs by 70-80%.
Second, data residency or compliance requirements. I work with healthcare and biotech clients. If patient images or lab specimen photos are going through Cloudinary's infrastructure, we need to have a conversation about BAAs and where data lives. Cloudinary offers enterprise compliance options, but at that point you're in contract territory and the cost has already jumped. Self-hosted is often cleaner.
The migration isn't painful if you stored public IDs instead of full URLs (see above). You write a new URL builder that points at your Imgproxy instance, update the config value, done. The images are still in your S3 bucket.
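The replacement URL builder is small. This sketch follows imgproxy's documented signing scheme (HMAC-SHA256 over salt-plus-path with hex-decoded key and salt, URL-safe base64, no padding); the host, key, and salt values are placeholders, and you'd want to check it against your imgproxy version before relying on it.

```php
<?php

// Minimal imgproxy URL builder: resize-to-fill at w x h, source URL
// base64-encoded into the path, path signed with HMAC-SHA256.
function imgproxyUrl(string $base, string $keyHex, string $saltHex, string $src, int $w, int $h): string
{
    // Source URL goes into the path as unpadded URL-safe base64.
    $encodedSrc = rtrim(strtr(base64_encode($src), '+/', '-_'), '=');
    $path = "/rs:fill:{$w}:{$h}/{$encodedSrc}.webp";

    // Sign binary salt + path with the binary key, then URL-safe encode.
    $hmac = hash_hmac('sha256', hex2bin($saltHex) . $path, hex2bin($keyHex), true);
    $sig = rtrim(strtr(base64_encode($hmac), '+/', '-_'), '=');

    return "{$base}/{$sig}{$path}";
}
```

Swap this in for the CloudinaryUrl wrapper, point it at your instance, and the Blade templates don't change.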
The Bottom Line
Cloudinary is a legitimately good product. I've used it on probably a dozen projects and I'd use it again tomorrow for the right situation. The mistake I see — including on projects I've inherited — is treating it as a permanent infrastructure decision rather than a time-to-market shortcut. Know what you're paying for, audit the credit usage quarterly, and have a clear trigger for when you'll revisit. That's not a knock on Cloudinary. That's just how you run infrastructure for clients who are watching their margins.
Need help shipping something like this? Get in touch.