File Storage
Upload and serve user files (images, PDFs, avatars) with Supabase Storage and Row Level Security.
ScaleRocket uses Supabase Storage for all user-uploaded files. It's built on top of S3, integrated with your existing Supabase auth and RLS, and requires no additional service to set up.
Where to Store What
| Type of file | Where | Why |
|---|---|---|
| Logo, favicon, site illustrations | apps/web/public/ | Static assets, deployed with your code |
| User uploads (images, PDFs) | Supabase Storage (public or private bucket) | Dynamic, per-user, access-controlled |
| Avatars, profile pictures | Supabase Storage (public bucket) | Needs to be viewable by others |
| Sensitive documents | Supabase Storage (private bucket) | Only the owner should access |
| Generated exports, reports | Supabase Storage (private bucket) | Created by the app, downloaded by the user |
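If you prefer this routing decision encoded in code rather than left to convention, the table collapses into a small lookup. This is a sketch with illustrative names (the `Destination` type and the category labels are not part of the boilerplate):

```typescript
// Where a given category of file should live, per the table above
type Destination =
  | { kind: "static" }                        // apps/web/public/, shipped with the code
  | { kind: "storage"; bucket: "uploads" }    // public Supabase Storage bucket
  | { kind: "storage"; bucket: "documents" }; // private Supabase Storage bucket

type FileCategory = "site-asset" | "avatar" | "user-upload" | "sensitive-doc" | "export";

export function destinationFor(category: FileCategory): Destination {
  switch (category) {
    case "site-asset":
      return { kind: "static" };
    case "avatar":
    case "user-upload":
      return { kind: "storage", bucket: "uploads" };
    case "sensitive-doc":
    case "export":
      return { kind: "storage", bucket: "documents" };
  }
}
```

The exhaustive `switch` over `FileCategory` means adding a new category is a compile error until you decide where it belongs.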
Create a Storage Bucket
Via Supabase Dashboard
- Go to Storage in your Supabase Dashboard
- Click New bucket
- Enter a name (e.g., `uploads`, `avatars`, `documents`)
- Choose Public or Private:
  - Public — Anyone with the URL can view the file (good for avatars, product images)
  - Private — Requires authentication to access (good for user documents, exports)
- Click Create bucket
Via Migration (Recommended)
Add a migration so the bucket is created automatically when someone sets up your boilerplate:
```sql
-- supabase/migrations/00016_storage_buckets.sql

-- Public bucket for user-facing images (avatars, uploads displayed on the site)
INSERT INTO storage.buckets (id, name, public)
VALUES ('uploads', 'uploads', true)
ON CONFLICT (id) DO NOTHING;

-- Private bucket for sensitive documents
INSERT INTO storage.buckets (id, name, public)
VALUES ('documents', 'documents', false)
ON CONFLICT (id) DO NOTHING;
```
Storage RLS Policies
Just like database tables, storage buckets use Row Level Security to control who can read, upload, and delete files.
Public Bucket (e.g., uploads)
Users can upload to their own folder and anyone can view:
```sql
-- Anyone can view files in the public bucket
CREATE POLICY "Public read access"
ON storage.objects FOR SELECT
USING (bucket_id = 'uploads');

-- Authenticated users can upload to their own folder
CREATE POLICY "Users can upload own files"
ON storage.objects FOR INSERT
TO authenticated
WITH CHECK (
  bucket_id = 'uploads'
  AND auth.uid()::text = (storage.foldername(name))[1]
);

-- Users can delete their own files
CREATE POLICY "Users can delete own files"
ON storage.objects FOR DELETE
TO authenticated
USING (
  bucket_id = 'uploads'
  AND auth.uid()::text = (storage.foldername(name))[1]
);
```
The key pattern is `auth.uid()::text = (storage.foldername(name))[1]`: each user's files are stored in a folder named after their user ID. User `abc123` uploads to `uploads/abc123/photo.jpg`.
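Client code can mirror this path convention so uploads always satisfy the policy. A minimal sketch (the helper names are illustrative, not part of the boilerplate):

```typescript
// Mirrors the SQL check: (storage.foldername(name))[1] is the first path
// segment of the object key, which this convention treats as the owner's user ID.
export function ownerOfPath(objectPath: string): string | null {
  const slash = objectPath.indexOf("/");
  // A key with no folder (e.g. "photo.jpg") has no owner segment
  return slash > 0 ? objectPath.slice(0, slash) : null;
}

// Build a key that passes the "upload to your own folder" policy
export function userScopedPath(userId: string, fileName: string): string {
  return `${userId}/${fileName}`;
}
```

For example, `ownerOfPath("abc123/photo.jpg")` yields `"abc123"`, so a pre-flight check like `ownerOfPath(path) === user.id` catches a policy violation before the round trip to Supabase.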
Private Bucket (e.g., documents)
Only the file owner can read and manage:
```sql
-- Only the owner can read their own files
CREATE POLICY "Users can read own documents"
ON storage.objects FOR SELECT
TO authenticated
USING (
  bucket_id = 'documents'
  AND auth.uid()::text = (storage.foldername(name))[1]
);

-- Users can upload to their own folder
CREATE POLICY "Users can upload own documents"
ON storage.objects FOR INSERT
TO authenticated
WITH CHECK (
  bucket_id = 'documents'
  AND auth.uid()::text = (storage.foldername(name))[1]
);

-- Users can delete their own documents
CREATE POLICY "Users can delete own documents"
ON storage.objects FOR DELETE
TO authenticated
USING (
  bucket_id = 'documents'
  AND auth.uid()::text = (storage.foldername(name))[1]
);
```
Upload Files from Your App
Basic Upload
```ts
// apps/app/src/lib/storage.ts
import { supabase } from "./supabase/client";

export async function uploadFile(
  bucket: string,
  file: File,
  path?: string
) {
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) throw new Error("Not authenticated");

  // Store in the user's folder: {user_id}/{timestamp}-{filename}
  const filePath = path || `${user.id}/${Date.now()}-${file.name}`;

  const { data, error } = await supabase.storage
    .from(bucket)
    .upload(filePath, file, {
      cacheControl: "3600",
      upsert: false,
    });

  if (error) throw error;

  // Get the public URL (for public buckets)
  const { data: urlData } = supabase.storage
    .from(bucket)
    .getPublicUrl(data.path);

  return {
    path: data.path,
    url: urlData.publicUrl,
  };
}
```
Upload Component (React)
```tsx
// apps/app/src/components/FileUpload.tsx
import { useState } from "react";
import { Button } from "@saas/ui";
import { uploadFile } from "@/lib/storage";

export function FileUpload({
  bucket = "uploads",
  onUpload,
}: {
  bucket?: string;
  onUpload: (url: string) => void;
}) {
  const [uploading, setUploading] = useState(false);

  const handleChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (!file) return;

    setUploading(true);
    try {
      const { url } = await uploadFile(bucket, file);
      onUpload(url);
    } catch (err) {
      console.error("Upload failed:", err);
    } finally {
      setUploading(false);
    }
  };

  return (
    <div>
      <input
        type="file"
        onChange={handleChange}
        disabled={uploading}
        className="hidden"
        id="file-upload"
      />
      <label htmlFor="file-upload">
        <Button asChild isLoading={uploading}>
          <span>{uploading ? "Uploading..." : "Upload file"}</span>
        </Button>
      </label>
    </div>
  );
}
```
Download and Display Files
Get Public URL
For public buckets, the URL is predictable:
```ts
const { data } = supabase.storage
  .from("uploads")
  .getPublicUrl("user-id/photo.jpg");

// data.publicUrl = "https://xxx.supabase.co/storage/v1/object/public/uploads/user-id/photo.jpg"
```
Get Signed URL (Private Buckets)
For private buckets, generate a temporary signed URL:
```ts
const { data, error } = await supabase.storage
  .from("documents")
  .createSignedUrl("user-id/contract.pdf", 3600); // expires in 1 hour

// data.signedUrl = "https://xxx.supabase.co/storage/v1/object/sign/documents/..."
```
Display an Image
```tsx
<img
  src={supabase.storage.from("uploads").getPublicUrl("user-id/avatar.jpg").data.publicUrl}
  alt="User avatar"
  className="h-16 w-16 rounded-full object-cover"
/>
```
Delete Files
```ts
const { error } = await supabase.storage
  .from("uploads")
  .remove(["user-id/old-photo.jpg"]);
```
To delete multiple files:
```ts
const { error } = await supabase.storage
  .from("uploads")
  .remove([
    "user-id/photo1.jpg",
    "user-id/photo2.jpg",
    "user-id/photo3.jpg",
  ]);
```
List Files
```ts
const { data, error } = await supabase.storage
  .from("uploads")
  .list("user-id/", {
    limit: 100,
    offset: 0,
    sortBy: { column: "created_at", order: "desc" },
  });

// data = [{ name: "photo.jpg", id: "...", created_at: "...", ... }, ...]
```
File Validation
Always validate files before uploading:
```ts
const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10MB
const ALLOWED_TYPES = ["image/jpeg", "image/png", "image/webp", "application/pdf"];

function validateFile(file: File): string | null {
  if (file.size > MAX_FILE_SIZE) {
    return "File is too large. Maximum size is 10MB.";
  }
  if (!ALLOWED_TYPES.includes(file.type)) {
    return "File type not allowed. Use JPEG, PNG, WebP, or PDF.";
  }
  return null; // valid
}
```
Storage Limits
| Supabase Plan | Storage | Bandwidth |
|---|---|---|
| Free | 1 GB | 2 GB/month |
| Pro ($25/mo) | 100 GB | 250 GB/month |
| Team ($599/mo) | Unlimited | Unlimited |
For most SaaS apps, the Pro plan is more than enough. If you need more, consider offloading large files to a dedicated CDN (Cloudflare R2, AWS S3).
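For a rough capacity check against the table above, simple per-user arithmetic is enough. A sketch (the 50 MB average footprint below is an invented example; measure your own app's):

```typescript
// How many users a storage quota supports at a given average footprint.
// Uses 1 GB = 1024 MB; rounds down, since a partial user's files don't fit.
export function usersPerQuota(quotaGB: number, avgMBPerUser: number): number {
  return Math.floor((quotaGB * 1024) / avgMBPerUser);
}
```

For example, `usersPerQuota(100, 50)` gives 2048: the Pro plan's 100 GB covers roughly two thousand users at an average of 50 MB each.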
Next Steps
- Configure your database to store file metadata alongside your data
- Set up the credits system if file processing costs credits
- Deploy to production — Storage buckets are created on your production Supabase project via migrations