Deployer Error: Runtime exited with error: signal: killed
Symptoms
- A bundle deployment fails in the Deployer UI with the error: `Runtime exited with error: signal: killed`
- The bundle passes `npx fusion verify` locally but fails consistently when deployed to sandbox or production
- The deployment log shows no other error before the process is terminated
Cause
The PageBuilder Engine compilation step runs inside an AWS Lambda function with a hard limit of 8 GB of RAM. When peak memory usage during compilation crosses that limit, AWS kills the process. The Deployer surfaces this as `signal: killed` because no more specific error is available at that point.
Memory usage during compilation is not a fixed number. It grows with the size and complexity of your bundle. The main factors are:
- How many output types you have defined. Each output type becomes a separate webpack entry point, and all entry points are compiled in a single pass and held in memory at the same time. Memory usage goes up directly with the number of output types.
- How large and deep your npm dependency tree is. For each output type, webpack walks and bundles every imported npm package, except React, prop-types, and a small set of packages the engine already provides. Large UI libraries with many transitive dependencies add up quickly.
- How many components you have. More features, chains, layouts, and content sources mean a larger webpack module graph.
Solution
Option 1: Reduce the number of output types
This is the most impactful change you can make. Removing a single output type removes one entire webpack entry point and all the components and dependencies it pulls in. Check your bundle’s `components/output-types/` directory and remove any output types that are not actively used in production.
Option 2: Audit and prune npm dependencies
Go through your `package.json` and look for:
- Large UI component libraries you only use partially. Import only the specific components you need rather than the full package.
- Packages listed under `dependencies` that are only needed during local development or at build time. Move those to `devDependencies`.
- Duplicate packages that serve the same purpose.
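As a starting point for the audit, a small script can flag `dependencies` entries that look like build-time tooling. This is a rough heuristic sketch, not an authoritative check: the `BUILD_ONLY_HINTS` list is an assumption you should adjust for your bundle, and every match still needs a human decision.

```javascript
// Sketch only: flag dependencies whose names suggest build-time-only tooling.
// The hint list is an assumption; verify each match before moving it.
const BUILD_ONLY_HINTS = ["webpack", "babel", "eslint", "jest", "prettier"];

function findMisplacedDeps(pkg, hints = BUILD_ONLY_HINTS) {
  const deps = Object.keys(pkg.dependencies || {});
  return deps.filter((name) => hints.some((hint) => name.includes(hint)));
}

// Hypothetical package.json contents for illustration.
const examplePkg = {
  dependencies: {
    react: "^17.0.0",
    "babel-loader": "^8.0.0",
    "eslint-plugin-react": "^7.0.0",
  },
};
console.log(findMisplacedDeps(examplePkg)); // ["babel-loader", "eslint-plugin-react"]
```

Anything the script flags that is not imported by runtime code can usually move to `devDependencies`, where webpack will not bundle it.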
Option 3: Check for circular dependencies
Circular dependencies can cause webpack to process the same module graph multiple times, which pushes memory usage up fast. Use madge to detect circular imports in your bundle:
```shell
npx madge --circular --extensions js,jsx,ts,tsx src/
```

Fixing even one circular dependency can bring the peak memory down meaningfully.
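Conceptually, a circular-import check builds a module dependency graph and searches it for cycles with a depth-first traversal. The sketch below shows that idea on a hard-coded graph; madge's real implementation parses your source files to build the graph first.

```javascript
// Sketch only: depth-first search for a cycle in a module dependency graph.
// Keys are modules, values are the modules they import.
function findCycle(graph) {
  const visiting = new Set(); // modules on the current DFS path
  const done = new Set();     // modules fully explored, known cycle-free

  function dfs(node, pathSoFar) {
    if (done.has(node)) return null;
    if (visiting.has(node)) {
      // Back edge found: the cycle is the path from the repeated node onward.
      return pathSoFar.slice(pathSoFar.indexOf(node)).concat(node);
    }
    visiting.add(node);
    for (const dep of graph[node] || []) {
      const cycle = dfs(dep, pathSoFar.concat(node));
      if (cycle) return cycle;
    }
    visiting.delete(node);
    done.add(node);
    return null;
  }

  for (const node of Object.keys(graph)) {
    const cycle = dfs(node, []);
    if (cycle) return cycle;
  }
  return null;
}

// a.js imports b.js, which imports c.js, which imports a.js again.
const graph = {
  "a.js": ["b.js"],
  "b.js": ["c.js"],
  "c.js": ["a.js"],
};
console.log(findCycle(graph)); // ["a.js", "b.js", "c.js", "a.js"]
```

Each file in the reported chain is a candidate for breaking the loop, usually by extracting the shared code into a module that both sides import.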
Option 4: Simulate the memory limit locally
You can run a local build under the same memory constraint the Lambda has before you deploy:
```shell
NODE_OPTIONS="--max-old-space-size=8192" npx fusion build
```

If the build gets killed locally with this flag set, it will fail in production too. Adding this check to your CI/CD pipeline catches the problem before it reaches the Deployer.
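One way to wire the check into CI is a `package.json` script. This is a sketch: the script name `build:memcheck` is illustrative, and it assumes a POSIX shell and that the Fusion CLI is available in the project.

```json
{
  "scripts": {
    "build:memcheck": "NODE_OPTIONS=\"--max-old-space-size=8192\" npx fusion build"
  }
}
```

A pre-deploy CI step can then run `npm run build:memcheck` and fail the pipeline before the bundle ever reaches the Deployer.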
Consider splitting your bundle with Micro Experiences
If you keep hitting this limit, the bundle has probably outgrown what individual fixes can solve. Removing an output type or pruning packages buys time. It does not change the underlying problem of a single bundle doing too much.
Micro Experiences (MX) let you run multiple PageBuilder instances, each with its own smaller codebase. You split the work across bundles: a subscriptions bundle, a sports bundle, and a default bundle for everything else. Each one compiles faster, fits more comfortably under the Lambda memory and size limits, and can be deployed by a separate team without touching the others.
If you are regularly working around this limit on every deploy, it is worth looking at MX as a longer-term approach.
Related articles:
- Introducing Micro Experiences
- Planning Your Micro Experience Migration
- Micro Experiences Developer Guide