Users kept asking for PDF support. The image converter already handled PNG to JPG, WebP to whatever—but PDFs were a gap.
The constraint: everything stays in the browser. No server uploads. Privacy is the whole point of these tools, so multi-page PDFs need to convert entirely client-side.
The Architecture
PDF rendering in the browser means PDF.js, Mozilla's JavaScript PDF renderer. It's powerful but comes with complexity: a separate worker thread for parsing, canvas rendering for each page, and memory management that can crash tabs if you're not careful.
The key insight: single-page PDFs should return a direct image file. Multi-page PDFs need packaging—we went with ZIP. Nobody wants to download 47 separate images.
The Worker Problem
PDF.js needs a separate worker file to handle the heavy parsing work off the main thread. The naive approach loads it from a CDN, but that breaks as soon as a Content Security Policy restricts scripts and workers to your own origin. And you should have CSP headers on any production application.
The solution is a postinstall script in package.json. When dependencies are installed, the script copies the worker file from the PDF.js package into the public folder. This keeps everything on the same origin, avoids CSP complications, and guarantees the worker version always matches the library version.
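A minimal sketch of that script, assuming the library is installed as pdfjs-dist and the worker is served from public/ (the script path, output name, and worker filename are illustrative and vary by PDF.js version):

```js
// scripts/copy-pdf-worker.mjs, run via "postinstall": "node scripts/copy-pdf-worker.mjs"
import { copyFileSync } from "node:fs";

// The worker filename differs across pdfjs-dist releases (pdf.worker.min.js in
// older versions, pdf.worker.min.mjs in newer ones); match the installed version.
copyFileSync(
  "node_modules/pdfjs-dist/build/pdf.worker.min.mjs",
  "public/pdf.worker.min.mjs"
);
```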
Lazy Loading for Performance
PDF.js weighs in at about 400 kilobytes when parsed. Users converting PNG to JPG shouldn't pay that cost—they never need PDF functionality. The solution uses dynamic imports that only load the library when someone actually uploads a PDF.
A singleton pattern ensures the library loads only once. The first PDF upload triggers the import, configures the worker path, and caches the library reference. Subsequent conversions reuse the cached instance without re-downloading.
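A sketch of that loader, assuming the worker file sits at /pdf.worker.min.mjs as copied by the postinstall step (the module path and function name are illustrative):

```ts
// lib/load-pdfjs.ts: loads PDF.js once and caches the module
let pdfjsPromise: Promise<typeof import("pdfjs-dist")> | null = null;

export function loadPdfJs() {
  // Guard for server-side rendering: PDF.js depends on browser APIs
  if (typeof window === "undefined") {
    throw new Error("PDF conversion is only available in the browser");
  }
  if (!pdfjsPromise) {
    pdfjsPromise = import("pdfjs-dist").then((pdfjs) => {
      // Same-origin worker copied into public/ by the postinstall script
      pdfjs.GlobalWorkerOptions.workerSrc = "/pdf.worker.min.mjs";
      return pdfjs;
    });
  }
  return pdfjsPromise;
}
```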
Server-Side Rendering Compatibility
Next.js tries to render everything server-side first, but PDF.js depends on browser APIs that don't exist in Node. The canvas element, web workers, and various DOM APIs simply aren't available during server rendering.
The lazy loading function includes an explicit check for the window object. If it's undefined—meaning we're in a server environment—the function throws an error early rather than letting PDF.js fail mysteriously. This makes debugging straightforward: the error message clearly indicates that PDF functionality requires a browser context.
The Conversion Logic
The core conversion function handles both single and multi-page PDFs with different output strategies. After loading the PDF and counting its pages, the logic branches.
For single-page documents, the function renders the page to a canvas, converts it to the target format, and returns the image directly. The user gets a single file download with no extra packaging overhead.
For multi-page documents, the function uses JSZip—also lazy-loaded—to bundle all rendered pages into a single archive. Each page renders in sequence, gets added to the ZIP file with a numbered filename, and the final archive downloads as one convenient package.
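Roughly, the branching looks like the sketch below. It assumes the loadPdfJs loader from earlier and a renderPageToBlob helper sketched in the next section; module paths, output filenames, and format handling are illustrative:

```ts
import { loadPdfJs } from "./load-pdfjs";

// Per-page canvas helper, sketched in the Canvas Rendering section below.
declare function renderPageToBlob(page: unknown, dpi: number, mimeType: string): Promise<Blob>;

export async function convertPdf(
  file: File,
  mimeType: string, // e.g. "image/png", "image/jpeg", "image/webp"
  dpi: number,
  onProgress?: (page: number, total: number) => void
): Promise<{ blob: Blob; filename: string }> {
  const pdfjs = await loadPdfJs();
  const data = new Uint8Array(await file.arrayBuffer());
  const doc = await pdfjs.getDocument({ data }).promise;
  try {
    const ext = mimeType.split("/")[1];

    // Single page: return the image directly, no archive needed.
    if (doc.numPages === 1) {
      const blob = await renderPageToBlob(await doc.getPage(1), dpi, mimeType);
      return { blob, filename: `page-1.${ext}` };
    }

    // Multiple pages: lazy-load JSZip and bundle every rendered page.
    const { default: JSZip } = await import("jszip");
    const zip = new JSZip();
    for (let i = 1; i <= doc.numPages; i++) {
      const page = await doc.getPage(i);
      zip.file(`page-${i}.${ext}`, await renderPageToBlob(page, dpi, mimeType));
      onProgress?.(i, doc.numPages);
    }
    return { blob: await zip.generateAsync({ type: "blob" }), filename: "pages.zip" };
  } finally {
    doc.destroy(); // release PDF.js resources (see Memory Management below)
  }
}
```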
Canvas Rendering and DPI
Each PDF page renders to an HTML canvas element before conversion to an image format. The DPI setting controls the output resolution. PDF's native resolution is 72 DPI—screen quality. For most uses, 150 DPI provides good clarity. Print-quality output uses 300 DPI, though file sizes increase substantially.
The canvas context configuration matters for different output formats. JPEG doesn't support transparency, so enabling the alpha channel wastes memory. PNG and WebP benefit from alpha support when the PDF contains transparent elements. The rendering function detects the target format and configures the canvas appropriately.
Before rendering, the function fills the canvas with white. This ensures PDFs with transparent backgrounds produce clean images rather than black patches where transparency existed, which is what an uninitialized, alpha-disabled canvas produces.
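A sketch of that per-page helper. The RenderablePage interface below is just a local structural type naming the two PDF.js page methods used here; the quality constant is illustrative:

```ts
// Minimal structural type for the parts of PDF.js's page object used here.
interface RenderablePage {
  getViewport(params: { scale: number }): { width: number; height: number };
  render(params: { canvasContext: CanvasRenderingContext2D; viewport: object }): {
    promise: Promise<void>;
  };
}

export async function renderPageToBlob(
  page: RenderablePage,
  dpi: number,
  mimeType: string
): Promise<Blob> {
  // PDF user space is 72 units per inch, so the scale factor is dpi / 72.
  const viewport = page.getViewport({ scale: dpi / 72 });

  const canvas = document.createElement("canvas");
  canvas.width = Math.ceil(viewport.width);
  canvas.height = Math.ceil(viewport.height);

  // JPEG has no transparency, so skip the alpha channel for it.
  const ctx = canvas.getContext("2d", { alpha: mimeType !== "image/jpeg" });
  if (!ctx) throw new Error("Could not create 2D canvas context");

  // White backdrop so transparent PDF regions come out clean.
  ctx.fillStyle = "#ffffff";
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  await page.render({ canvasContext: ctx, viewport }).promise;

  return new Promise<Blob>((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("Canvas export failed"))),
      mimeType,
      0.92 // quality, only honored by lossy formats
    )
  );
}
```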
Browser Canvas Limits
Browsers impose maximum canvas dimensions. Chrome caps around 16,384 pixels per side. A 300 DPI render of an A4 page hits about 3,500 pixels—comfortably safe. But high-resolution renders of larger documents could exceed these limits.
The implementation includes dimension checking before rendering. If the calculated canvas size would exceed browser limits, the function scales down proportionally while maintaining the aspect ratio. Users get a working image at slightly reduced resolution rather than a crashed browser tab.
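The guard itself is small. A sketch, assuming a conservative 16,384-pixel per-side cap (the constant and function name are illustrative):

```ts
// Most browsers cap canvas dimensions; ~16,384 px per side is a safe assumption.
const MAX_CANVAS_SIDE = 16384;

// Given the page size at 72 DPI and the requested scale, return a scale that
// keeps both canvas sides within the browser limit, preserving aspect ratio.
function clampScale(pageWidth: number, pageHeight: number, scale: number): number {
  const longestSide = Math.max(pageWidth, pageHeight) * scale;
  return longestSide > MAX_CANVAS_SIDE ? scale * (MAX_CANVAS_SIDE / longestSide) : scale;
}
```

In the renderPageToBlob sketch above, this would run against the dimensions of a scale-1 viewport before the canvas is created.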
Progress Feedback
Multi-page conversions take noticeable time. A 47-page document needs 47 separate canvas renders and image conversions. Without feedback, users might think the tool has frozen.
The conversion function accepts a progress callback that fires after each page completes. The UI subscribes to these updates and displays the current page number alongside the total count. Users see exactly what's happening: "Converting page 3 of 47" progresses steadily until completion.
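As a rough illustration, a small React hook could wrap the convertPdf sketch above (the hook, its import path, and the format and DPI values here are all hypothetical):

```tsx
// Hypothetical React wiring around the convertPdf sketch above.
import { useState } from "react";
import { convertPdf } from "./convert-pdf";

export function usePdfConversion() {
  const [progress, setProgress] = useState<string | null>(null);

  async function convert(file: File) {
    const result = await convertPdf(file, "image/png", 150, (page, total) =>
      setProgress(`Converting page ${page} of ${total}`)
    );
    setProgress(null); // clear the indicator once the conversion finishes
    return result;
  }

  return { progress, convert };
}
```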
Docker Deployment Considerations
The postinstall script that copies the worker file needed adjustments for Docker builds. In a multi-stage Dockerfile, the dependencies stage runs the install command, but the public folder might not exist yet in that build context.
The solution creates the public directory explicitly before running the package installation. This ensures the postinstall script has somewhere to copy the worker file. An additional explicit copy command in the builder stage provides belt-and-suspenders reliability—if postinstall fails for any reason, the worker still ends up where it needs to be.
UX Simplification
The original implementation included a page selector—users picked which specific page to convert. This seemed like a reasonable feature during initial development, but user feedback was clear: people want all pages converted at once.
Removing the page selector simplified the code substantially. No more tracking selected page state, no validation for page numbers, no increment and decrement buttons. The state interface shrank, the UI became cleaner, and the user experience improved. Sometimes the best feature is the one you remove.
Memory Management
PDF.js creates document references that need explicit cleanup. Without proper disposal, converting several large PDFs in a single session gradually consumes memory until the browser tab crashes.
The solution wraps conversion logic in a try-finally block. Regardless of success or failure, the finally block calls the destroy method on the PDF document reference, releasing the memory it held. The same pattern applies to ImageBitmap objects used in regular image conversions—each needs explicit closing to prevent leaks.
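The PDF side of that pattern already appears in the convertPdf sketch above, where the finally block calls destroy on the document. The ImageBitmap side looks roughly like this (function name illustrative):

```ts
// Draw an uploaded image to a canvas and release the bitmap afterwards.
async function imageFileToCanvas(file: File): Promise<HTMLCanvasElement> {
  const bitmap = await createImageBitmap(file);
  try {
    const canvas = document.createElement("canvas");
    canvas.width = bitmap.width;
    canvas.height = bitmap.height;
    canvas.getContext("2d")!.drawImage(bitmap, 0, 0);
    return canvas;
  } finally {
    bitmap.close(); // explicit release; the GC won't reclaim this promptly on its own
  }
}
```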
Generated File Handling
The worker file is generated during installation, not committed to source control. Adding it to version control would create unnecessary repository bloat and potential version mismatches if someone forgets to update it after upgrading PDF.js.
The gitignore configuration excludes the worker file from commits. Fresh clones work correctly because the postinstall script runs automatically during dependency installation, copying the worker to its expected location. The file appears where it needs to be without polluting the repository.
Lessons Learned
Browser-based document processing is viable, but it requires careful attention to worker delivery, lazy loading, server-side rendering guards, canvas limits, and memory cleanup.
A 10-page PDF converts in about 3 seconds at 150 DPI. Fast enough to feel responsive, slow enough that the progress indicator earns its keep. The privacy-first approach—never uploading files to a server—comes with real engineering complexity, but users appreciate knowing their documents never leave their device.