AWS S3 Pre-signed URLs: Getting Upload Security Right

Pre-signed S3 URLs look simple until you realize you've handed users a blank check. Here's how I actually lock them down.

Pre-signed S3 URLs feel like magic the first time you use them. Your server mints a URL, hands it to the browser, the browser uploads directly to S3 — no file ever touches your application server. Then you discover what you didn't configure, and that magic turns into a liability.

What Problem This Actually Solves

The naive approach to file uploads runs the file through your web server: browser posts to your Laravel endpoint, PHP writes to disk or streams to S3, you return a response. That works fine until a user tries to upload a 400 MB DICOM file and your 30-second PHP timeout kills it. Or until you're running four app servers behind a load balancer and you have to figure out where the temp file landed. Or until your EC2 bandwidth bill arrives.
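For concreteness, the naive version is barely any code, which is exactly why it's everywhere. A minimal sketch (the controller name, disk, and size limit are placeholders):

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class NaiveUploadController extends Controller
{
    public function store(Request $request)
    {
        // max: is in kilobytes; 512000 KB = ~500 MB.
        $request->validate(['file' => 'required|file|max:512000']);

        // Every byte flows through PHP before it reaches the s3 disk.
        $path = $request->file('file')->store('uploads', 's3');

        return ['path' => $path];
    }
}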

Pre-signed URLs sidestep all of that. Your server never sees the file bytes. AWS S3 accepts the upload directly from the client, authenticated by a time-limited signature you generated server-side. The signature is cryptographically tied to the bucket, the key, the expiry, and, crucially, the content type and size conditions you specify (via a POST policy; more on PUT vs POST below). If any of those don't match, S3 rejects the request.

That last part is where most implementations I've audited go wrong. They generate the URL and call it done. They haven't constrained what the client can actually upload.

A Working Implementation

I use the AWS SDK for PHP directly rather than going through Laravel's Flysystem abstraction for this, because Flysystem doesn't expose the pre-sign controls I need. Here's the service class I actually ship:

<?php

namespace App\Services;

use Aws\S3\S3Client;
use Aws\S3\PostObjectV4;
use Illuminate\Support\Str;

class S3UploadService
{
    private S3Client $client;
    private string $bucket;

    public function __construct()
    {
        $this->client = new S3Client([
            'version' => 'latest',
            'region'  => config('filesystems.disks.s3.region'),
            'credentials' => [
                'key'    => config('filesystems.disks.s3.key'),
                'secret' => config('filesystems.disks.s3.secret'),
            ],
        ]);

        $this->bucket = config('filesystems.disks.s3.bucket');
    }

    /**
     * Generate a pre-signed POST policy for direct browser uploads.
     * This is more secure than a pre-signed PUT URL because you can
     * enforce content-type and file size at the S3 level.
     */
    public function presignedUpload(
        string $folder,
        string $allowedMimeType,
        int $maxBytes,
        int $expiresInSeconds = 300
    ): array {
        // S3 substitutes ${filename} with the uploaded file's name at
        // upload time. Single quotes matter here: in double quotes PHP
        // would try to interpolate a $filename variable.
        $key = $folder . '/' . Str::uuid() . '-${filename}';

        $formInputs = [
            'key'          => $key,
            // Drop the acl field if your bucket has ACLs disabled
            // (Object Ownership set to "bucket owner enforced").
            'acl'          => 'private',
            'Content-Type' => $allowedMimeType,
        ];

        $options = [
            ['acl'          => 'private'],
            ['bucket'       => $this->bucket],
            ['starts-with', '$key', $folder . '/'],
            ['starts-with', '$Content-Type', $allowedMimeType],
            ['content-length-range', 1, $maxBytes],
        ];

        $postObject = new PostObjectV4(
            $this->client,
            $this->bucket,
            $formInputs,
            $options,
            '+' . $expiresInSeconds . ' seconds'
        );

        return [
            'url'    => $postObject->getFormAttributes()['action'],
            'fields' => $postObject->getFormInputs(),
            'key'    => $key,
            'expires_at' => now()->addSeconds($expiresInSeconds)->toIso8601String(),
        ];
    }
}

And the controller endpoint that issues the credential:

<?php

namespace App\Http\Controllers;

use App\Services\S3UploadService;
use Illuminate\Http\Request;

class UploadTokenController extends Controller
{
    private const ALLOWED_TYPES = [
        'image'    => ['image/jpeg', 'image/png', 'image/webp'],
        'document' => ['application/pdf'],
        'dicom'    => ['application/dicom'],
    ];

    private const MAX_BYTES = [
        'image'    => 10_000_000,   // 10 MB
        'document' => 50_000_000,   // 50 MB
        'dicom'    => 500_000_000,  // 500 MB
    ];

    public function __construct(private S3UploadService $uploads) {}

    public function issue(Request $request, string $type): array
    {
        // This route belongs behind auth middleware; guard here too so an
        // unauthenticated request can't mint a key under a null user id.
        abort_unless(auth()->check(), 401);

        $request->validate([
            'content_type' => 'required|string',
        ]);

        abort_unless(
            isset(self::ALLOWED_TYPES[$type]),
            422,
            'Unknown upload type'
        );

        abort_unless(
            in_array($request->content_type, self::ALLOWED_TYPES[$type], true),
            422,
            'Content type not permitted for this upload type'
        );

        $folder = match($type) {
            'image'    => 'uploads/images/' . auth()->id(),
            'document' => 'uploads/documents/' . auth()->id(),
            'dicom'    => 'uploads/dicom/' . auth()->id(),
        };

        return $this->uploads->presignedUpload(
            folder:          $folder,
            allowedMimeType: $request->content_type,
            maxBytes:        self::MAX_BYTES[$type]
        );
    }
}

The client gets back a URL, a set of form fields, and an expiry. It does a standard multipart POST directly to S3. If the file is too big, the MIME type is wrong, or the signature is expired, S3 returns a 403 and my server was never involved.
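For reference, here's what that POST looks like on the wire, exercised from PHP with Guzzle (already pulled in by the AWS SDK) rather than a browser. A minimal test sketch; the folder argument and test.jpg are placeholders:

<?php

use GuzzleHttp\Client;

$grant = app(App\Services\S3UploadService::class)
    ->presignedUpload('uploads/images/1', 'image/jpeg', 10_000_000);

// Every signed field has to be submitted, and the file part must come
// last: S3 ignores any field that appears after it.
$multipart = [];
foreach ($grant['fields'] as $name => $value) {
    $multipart[] = ['name' => $name, 'contents' => $value];
}
$multipart[] = [
    'name'     => 'file',
    'contents' => fopen('test.jpg', 'r'),
    'filename' => 'test.jpg',
];

$response = (new Client(['http_errors' => false]))
    ->post($grant['url'], ['multipart' => $multipart]);

// 204 on success, 403 if any policy condition failed.
echo $response->getStatusCode();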

The Gotchas That Will Bite You

PUT vs POST pre-signing is not the same thing. The AWS SDK has createPresignedRequest for PUT and PostObjectV4 for POST. A pre-signed PUT URL is one URL, one action, no policy document. A pre-signed POST is a URL plus a signed policy with conditions. For browser uploads you almost always want POST: it's the only way to enforce content-length-range, and the policy is where the content type gets locked down. I've seen teams use PUT URLs thinking they were safe, then discover the client could upload any content type and any size, because a PUT signature only covers what you explicitly sign. Unless you add Content-Type to the signed headers it isn't checked at all, and there is no way to cap object size with a pre-signed PUT, period.
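Here's the PUT variant for contrast; a minimal sketch reusing the same S3Client as the service class, with a placeholder key. Notice how little the signature pins down:

<?php

// Pre-signed PUT for comparison. $client and $bucket as in
// S3UploadService above; the key is a placeholder.
$command = $client->getCommand('PutObject', [
    'Bucket'      => $bucket,
    'Key'         => 'uploads/images/1/example.jpg',
    // Only enforced because we sign it here, and only if the client
    // sends the exact same header with the PUT.
    'ContentType' => 'image/jpeg',
]);

$request = $client->createPresignedRequest($command, '+5 minutes');
$url = (string) $request->getUri();

// Nothing above constrains the object size. That's what the POST
// policy's content-length-range condition buys you.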

CORS will ruin your Tuesday. S3 has its own CORS configuration, separate from anything CloudFront does in front of it. You have to explicitly add a CORS rule on the bucket allowing POST from your origin. I keep this in Terraform so it doesn't get lost:

# AWS provider v4+ wants CORS as its own resource rather than an inline
# cors_rule block on aws_s3_bucket. The bucket reference is a placeholder.
resource "aws_s3_bucket_cors_configuration" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["POST", "PUT"]
    allowed_origins = ["https://yourapp.com"]
    expose_headers  = ["ETag"]
    max_age_seconds = 3000
  }
}

Missing expose_headers = ["ETag"] will haunt you if you ever move the really big files to pre-signed multipart uploads: CompleteMultipartUpload needs the ETag of every part, and without that expose header the browser can't read the ETag off the part responses, so your completion call fails silently.
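If you do go the multipart route, those ETags are what the completion call is built from. A compressed sketch of the server-side finish, assuming the browser uploaded each part with a pre-signed PUT and posted the collected ETags back ($s3, $bucket, $key, $uploadId, and $etags are all stand-ins):

<?php

// $uploadId comes from the createMultipartUpload call that started
// the upload; $etags is what the browser collected per part.
$s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $key,
    'UploadId'        => $uploadId,
    'MultipartUpload' => [
        'Parts' => [
            ['PartNumber' => 1, 'ETag' => $etags[1]],
            ['PartNumber' => 2, 'ETag' => $etags[2]],
        ],
    ],
]);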

The key prefix matters for access control. I scope keys to uploads/{type}/{user_id}/ and then have a separate IAM policy that limits what the application role can read from those prefixes. The upload credential gets created server-side under an IAM role that can only PutObject to the uploads/ prefix, never GetObject. A different role handles serving files. Conflating these is how you end up with a bucket that's inadvertently world-readable.

Clock skew will get you in staging. Pre-signed URLs are validated against request time. If your dev machine or a container's clock drifts more than 15 minutes from AWS time, every request returns a 403 RequestTimeTooSkewed. Ran into this on a Docker setup at a healthcare client — the container's clock had drifted because NTP wasn't syncing through the NAT. Wasted two hours before I thought to check date inside the container.

Don't skip the post-upload verification step. The URL lets them upload. It doesn't mean the upload succeeded cleanly or that the file is what they claimed. After a successful upload, I fire a queued job that checks the object exists in S3, validates its size matches what the user reported, and for images runs it through a basic MIME sniff (not trusting the Content-Type header). For DICOM files at biotech clients I queue a more thorough validation step. Trust but verify.
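The verification job looks roughly like this. A minimal sketch: VerifyUpload, the constructor arguments, and the quarantine handling are illustrative rather than the full job I run, and it assumes the client reports the final object key back after upload (S3 substitutes ${filename}, so the grant key alone isn't enough):

<?php

namespace App\Jobs;

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class VerifyUpload implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(
        private string $key,
        private int $reportedBytes,
    ) {}

    public function handle(): void
    {
        // Same client configuration as S3UploadService above.
        $s3 = new S3Client([
            'version'     => 'latest',
            'region'      => config('filesystems.disks.s3.region'),
            'credentials' => [
                'key'    => config('filesystems.disks.s3.key'),
                'secret' => config('filesystems.disks.s3.secret'),
            ],
        ]);

        $bucket = config('filesystems.disks.s3.bucket');

        try {
            $head = $s3->headObject(['Bucket' => $bucket, 'Key' => $this->key]);
        } catch (S3Exception $e) {
            return; // Grant was issued but the object never landed.
        }

        if ((int) $head['ContentLength'] !== $this->reportedBytes) {
            return; // Size mismatch: flag for review, don't trust the client.
        }

        // Sniff the real MIME type from the first bytes instead of
        // trusting the Content-Type header the client sent.
        $firstBytes = (string) $s3->getObject([
            'Bucket' => $bucket,
            'Key'    => $this->key,
            'Range'  => 'bytes=0-8191',
        ])['Body'];

        $sniffed = (new \finfo(FILEINFO_MIME_TYPE))->buffer($firstBytes);

        if ($sniffed !== ($head['ContentType'] ?? null)) {
            // Claimed and actual type disagree: quarantine for review.
        }
    }
}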

When I'd Reach for This

Anytime the files are large, the users are on slow or unreliable connections, or you're running a horizontally scaled app and don't want to think about where temp files land. That covers most of the upload scenarios I deal with — patient records, lab data, product images for e-commerce, print assets that can be 200+ MB.

I'd also use this pattern even for smaller files if the volume is high. Running file upload bytes through your application tier is a real cost — both in compute and bandwidth. Pre-signed uploads push that cost onto S3 where it belongs.

When I wouldn't reach for this: uploads that need server-side processing before storage, where you actually want the bytes to hit your server first — a CSV import that needs to be validated and parsed, for example, or anything where you're immediately transforming the content. Pre-signing is for storage, not processing.

Also worth saying: if you're on a simpler project with small files and a single server, the added complexity of pre-signed URLs is real. Keep it simple when simple works.

Bottom Line

Pre-signed S3 uploads are genuinely good infrastructure — less load on your servers, better upload reliability for end users, and the security model is sound if you actually use it. The footgun is treating them as a URL vending machine without specifying what the URL permits. Lock down the content type, enforce the size limit with a POST policy, scope the key prefix, and verify after the fact. That's the whole checklist.

Need help shipping something like this? Get in touch.