PrimeStack.
Engineering · Dec 5, 2025

Micro-Frontends: A Practical Guide for Large Teams

Scaling frontend development across multiple teams? Discover how micro-frontends can improve velocity and reduce deployment risks.


Micro-frontends extend the principles of microservices to the browser. Instead of a single monolithic frontend application owned by a single team, you decompose the UI into independently deployable pieces, each owned by a different team, built with whatever technology serves that team best, and composed into a unified product at runtime.

The promise is genuine: independent deployments, isolated failures, team autonomy. But so are the costs. Before your team commits to this architecture, it is worth understanding exactly what you are signing up for.



What Are Micro-Frontends and When Does the Pattern Make Sense?

A micro-frontend is a vertically sliced piece of a web application: it owns everything from the UI components down to the API calls for its domain. A team building the checkout experience owns the checkout micro-frontend — the React components, the state management, the API integration, and the deployment pipeline. No other team touches that codebase for routine checkout changes.

The pattern makes sense when you have:

  • Multiple independent teams working on the same frontend. If two teams are regularly blocked by merge conflicts or deployment coordination, micro-frontends can remove the coupling.
  • Significantly different release cadences. A marketing team that deploys landing page changes dozens of times a day should not be blocked by the engineering team's two-week release cycle.
  • Domain complexity that justifies bounded context isolation. If your product has genuinely distinct domains (e-commerce: catalog, checkout, account, order history), and each domain has dedicated team ownership, vertical decomposition is natural.
  • Legacy migration needs. If you need to incrementally replace a legacy application with a modern stack, micro-frontends let you wrap the legacy code and replace it domain by domain.

The pattern does not make sense for:

  • Small teams — below roughly 30–50 frontend engineers — where the overhead is not justified.
  • Products with high cross-domain UI dependencies, where components from many domains appear on the same page and must share significant state.
  • Organizations without the infrastructure maturity to run multiple independent CI/CD pipelines.

The Three Main Implementation Approaches

iframes

The oldest approach. Each micro-frontend is a separate application loaded in an <iframe>. The shell application lays out the page and composes iframes by URL.

Iframes provide genuine isolation: CSS cannot leak across the frame boundary, JavaScript errors are contained, and teams can use completely different stacks. But the user experience cost is significant: iframes do not resize to content height natively, accessibility is degraded, performance suffers from separate browsing contexts, and routing is complex. Today, iframes are appropriate only for embedding genuinely external content (payment widgets, third-party tools) where isolation from the host is a security requirement.
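
The height problem in particular is usually solved with a postMessage protocol between the frame and the shell. A minimal sketch — the "mfe:resize" message type and the origin URL are conventions invented here, not a standard:

```typescript
// Inside the embedded micro-frontend: report content height upward.
// In a browser this would call window.parent.postMessage; the sender is
// injected here so the logic stays testable.
export function reportHeight(
  post: (msg: { type: string; height: number }, targetOrigin: string) => void,
  height: number
): void {
  post({ type: "mfe:resize", height }, "https://shell.example.com");
}

// Inside the shell: validate the message and resize the matching iframe.
export function handleResizeMessage(
  msg: { type?: string; height?: number },
  setIframeHeight: (cssHeight: string) => void
): void {
  if (msg.type === "mfe:resize" && typeof msg.height === "number") {
    setIframeHeight(`${msg.height}px`);
  }
}
```

In the shell, the handler would be wired to `window.addEventListener("message", ...)` with an origin check before trusting the payload.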

Web Components

Each micro-frontend exposes a custom element (e.g., <checkout-widget>) that encapsulates its implementation. The shell application places these elements in the DOM like any other HTML element.

Web Components provide encapsulation through Shadow DOM (CSS isolation) and a standardized browser-native interface that does not depend on any framework. The weakness: framework interoperability is still awkward. React renders poorly into Shadow DOM without workarounds, and custom event communication adds ceremony. For teams committed to a single framework stack, Web Components add complexity without proportional benefit.
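
One concrete piece of that ceremony: custom-element attributes are strings, so passing rich data from a framework host means serializing it. A small helper sketch — the JSON-attribute convention and the helper names are ours, not part of the Web Components spec:

```typescript
// Custom-element attributes are strings, so rich data must be serialized.
// AttrHost is the minimal element surface the helpers need.
interface AttrHost {
  setAttribute(name: string, value: string): void;
  getAttribute(name: string): string | null;
}

export function setJsonAttr(el: AttrHost, name: string, value: unknown): void {
  el.setAttribute(name, JSON.stringify(value));
}

export function getJsonAttr<T>(el: AttrHost, name: string): T | null {
  const raw = el.getAttribute(name);
  return raw === null ? null : (JSON.parse(raw) as T);
}
```

A React host would call `setJsonAttr(el, "cart", cart)` in an effect, and the element would parse it back in `attributeChangedCallback` — exactly the kind of boilerplate a single-framework team avoids by staying with framework-native props.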

Module Federation

Module Federation is a Webpack (and Vite, via plugin) feature that lets a running application dynamically load code from a remote application's build output at runtime. This is the dominant approach for React-based micro-frontend systems.

Each micro-frontend exposes components or entire route trees as "remotes." The shell application (the "host") loads these remotes at runtime and renders them as if they were local modules.


Webpack Module Federation Deep Dive

Module Federation turns each micro-frontend's build output into a JavaScript module graph that other applications can consume at runtime. The key concepts:

Exposed modules — each remote declares what it makes available:

// webpack.config.js for the "checkout" micro-frontend
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: "checkout",
      filename: "remoteEntry.js",
      exposes: {
        "./CheckoutPage": "./src/pages/CheckoutPage",
        "./CartSummary": "./src/components/CartSummary",
      },
      shared: {
        react: { singleton: true, requiredVersion: "^18.0.0" },
        "react-dom": { singleton: true, requiredVersion: "^18.0.0" },
      },
    }),
  ],
};

Host configuration — the shell app declares its remotes and where to find their entry points:

// webpack.config.js for the shell app
new ModuleFederationPlugin({
  name: "shell",
  remotes: {
    checkout: "checkout@https://checkout.example.com/remoteEntry.js",
    catalog: "catalog@https://catalog.example.com/remoteEntry.js",
  },
  shared: {
    react: { singleton: true, requiredVersion: "^18.0.0" },
    "react-dom": { singleton: true, requiredVersion: "^18.0.0" },
  },
});

Consuming a remote in the shell:

import React, { lazy, Suspense } from "react";

const CheckoutPage = lazy(() => import("checkout/CheckoutPage"));

export function App() {
  return (
    <Suspense fallback={<div>Loading checkout...</div>}>
      <CheckoutPage />
    </Suspense>
  );
}

Shared Dependencies and Versioning

The shared configuration is critical. Without it, every micro-frontend bundles its own copy of React, and you end up with multiple React instances on the page — which breaks hooks.

With singleton: true, Module Federation ensures only one instance of React is loaded, regardless of how many micro-frontends are on the page. The version negotiation algorithm picks the highest compatible version.

The danger: version conflicts. If the shell requires react@^18.2.0 and a remote requires react@^17.0.0, they are incompatible. The remote will fall back to loading its own copy, breaking the singleton guarantee. Establish a dependency versioning contract across all micro-frontend teams and enforce it in CI.
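
Beyond CI checks, webpack can enforce the contract at load time: with strictVersion, an incompatible shared version throws immediately instead of silently falling back to a duplicate copy. A sketch of the stricter config:

```javascript
// Shared config that fails fast: strictVersion turns a silent duplicate
// React copy into a loud load-time error you catch in staging.
new ModuleFederationPlugin({
  // ...name / remotes / exposes as in the earlier examples
  shared: {
    react: { singleton: true, strictVersion: true, requiredVersion: "^18.0.0" },
    "react-dom": { singleton: true, strictVersion: true, requiredVersion: "^18.0.0" },
  },
});
```

Failing loudly in staging is almost always preferable to a subtly broken hooks runtime in production.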


The Routing Problem and Solutions

Routing is the hardest problem in micro-frontend architecture. Each micro-frontend typically uses its own router, but the browser has one URL. You need one authoritative source of routing truth.

Shell App Pattern

The shell application owns top-level routing. It maps URL prefixes to micro-frontends:

// Shell routing — maps paths to dynamically loaded micro-frontends
const routes = [
  { path: "/catalog/*", component: lazy(() => import("catalog/CatalogApp")) },
  { path: "/checkout/*", component: lazy(() => import("checkout/CheckoutApp")) },
  { path: "/account/*", component: lazy(() => import("account/AccountApp")) },
];

Each micro-frontend handles sub-routing within its prefix. The catalog app owns all routes under /catalog/. This clean separation means teams never conflict on routing configuration.
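
The contract between shell and remote is the prefix itself: the remote treats it as its base (with React Router, the basename prop). The resolution logic amounts to a prefix strip — a sketch, with the innerPath helper name being ours:

```typescript
// Resolve a micro-frontend's internal route from the browser URL,
// given the prefix the shell mounted it under (e.g. "/catalog").
export function innerPath(browserPath: string, mountPrefix: string): string {
  const rest = browserPath.slice(mountPrefix.length);
  const matches =
    browserPath.startsWith(mountPrefix) && (rest === "" || rest.startsWith("/"));
  if (!matches) throw new Error(`${browserPath} is not under ${mountPrefix}`);
  return rest === "" ? "/" : rest; // "/catalog" alone maps to the remote's root
}
```

So "/catalog/products/42" resolves to "/products/42" inside the catalog app, and a path owned by another team is rejected rather than misrouted.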

Path-Based vs. Domain-Based Routing

Path-based routing (example.com/catalog, example.com/checkout) keeps everything on one origin, simplifying cookie sharing, CORS, and session management. A reverse proxy (Nginx, CloudFront) routes requests to the appropriate micro-frontend's server.

Domain-based routing (catalog.example.com, checkout.example.com) provides stronger isolation and makes it visually clear which team owns what, but introduces cross-origin complexity: cookies need SameSite=None; Secure, shared authentication requires a token exchange mechanism, and absolute URLs must be used throughout.

For most teams, path-based routing is strictly easier to operate. Choose domain-based routing only if you have a specific reason for cross-origin isolation.


Shared Design System Without Tight Coupling

Every micro-frontend needs UI primitives: buttons, inputs, modals, typography. If each team builds these independently, you get UI inconsistency and duplicated effort. If a central team owns a shared component library, every micro-frontend takes a dependency on it, and updates to the library can break multiple micro-frontends simultaneously.

The solution is versioned, independently deployable design system packages.

The design system team publishes to a private npm registry (or GitHub Packages). Each micro-frontend pins to a specific version:

{
  "dependencies": {
    "@company/design-system": "2.4.1"
  }
}

Teams upgrade on their own schedule. Breaking changes follow semantic versioning. The design system team maintains a migration guide and a compatibility matrix.

The key discipline: the design system provides primitives (tokens, base components) and avoids encoding business logic. The moment a component in the design system needs to know about "orders" or "users," it has violated its boundary.

Share design tokens (colors, spacing, typography) via CSS custom properties. This allows the design system to evolve the visual language without forcing component library updates.
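
One low-coupling way to ship tokens is to publish them as data and render the CSS custom properties at build time. A minimal sketch — the token names and the tokensToCss helper are invented for illustration:

```typescript
// Design tokens published as plain data by the design system package.
export const tokens: Record<string, string> = {
  "color-primary": "#0d6efd",
  "spacing-md": "16px",
};

// Render tokens to a :root block of CSS custom properties, so consumers
// reference var(--color-primary) without importing any components.
export function tokensToCss(t: Record<string, string>): string {
  const decls = Object.entries(t)
    .map(([name, value]) => `  --${name}: ${value};`)
    .join("\n");
  return `:root {\n${decls}\n}`;
}
```

The shell injects the generated stylesheet once; every micro-frontend picks up a token change on the next page load without rebuilding.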


Cross-Micro-Frontend Communication

Micro-frontends should be loosely coupled. Direct imports between them defeat the purpose of the architecture. Use one of two communication patterns:

Custom Events

The browser's native event system is an ideal decoupled communication channel:

// Checkout emits an event when a purchase completes
window.dispatchEvent(
  new CustomEvent("checkout:purchase-complete", {
    detail: { orderId: "ord_123", total: 149.99 },
    bubbles: true,
  })
);

// Analytics micro-frontend listens for the event
window.addEventListener("checkout:purchase-complete", (event) => {
  const { orderId, total } = (event as CustomEvent).detail;
  trackPurchase(orderId, total);
});

Custom events work without any shared dependency and are easy to test in isolation. The downside: they are fire-and-forget. You cannot wait for a response without building a request/response protocol on top.
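
When a response is genuinely needed, a thin correlation-ID layer can sit on top of the bus. A sketch against a minimal emitter interface — the EventBus shape and the ":response" suffix are conventions we are inventing here:

```typescript
// Request/response built on fire-and-forget events via correlation IDs.
type Handler = (detail: any) => void;

export interface EventBus {
  emit(type: string, detail: any): void;
  on(type: string, handler: Handler): () => void; // returns an unsubscribe fn
}

export function request(
  bus: EventBus,
  type: string,
  detail: Record<string, unknown>,
  timeoutMs = 5000
): Promise<any> {
  const correlationId = Math.random().toString(36).slice(2);
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      off();
      reject(new Error(`No response for ${type} within ${timeoutMs}ms`));
    }, timeoutMs);
    const off = bus.on(`${type}:response`, (d) => {
      if (d.correlationId !== correlationId) return; // someone else's reply
      clearTimeout(timer);
      off();
      resolve(d.payload);
    });
    bus.emit(type, { ...detail, correlationId });
  });
}
```

The timeout matters: in a distributed frontend, the responder micro-frontend may simply not be loaded, and a hung promise is worse than an explicit failure.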

Shared State Store

For state that genuinely needs to be shared across micro-frontends — user authentication, shopping cart contents, global notifications — use a shared singleton store accessed via a shared dependency.

The store is published as a package that micro-frontends import. Because it is marked as singleton in the Module Federation shared config, only one instance runs on the page:

// @company/shared-state — singleton store package (this example uses Zustand)
import { create } from "zustand";

interface CartItem { id: string; price: number }
interface CartState {
  items: CartItem[];
  addItem: (item: CartItem) => void;
  removeItem: (id: string) => void;
}

export const cartStore = create<CartState>((set) => ({
  items: [],
  addItem: (item) => set((state) => ({ items: [...state.items, item] })),
  removeItem: (id) => set((state) => ({ items: state.items.filter((i) => i.id !== id) })),
}));

Keep shared state minimal. If a state value is only needed within one micro-frontend's domain, it should live there. The shared store is for cross-cutting concerns only.


CI/CD for Independent Deployment

Independent deployment is the core value proposition of micro-frontends. Each micro-frontend has its own CI/CD pipeline that deploys without coordinating with other teams.

A typical pipeline per micro-frontend:

  1. Build — compile and bundle with Webpack/Vite, output remoteEntry.js and static assets.
  2. Test — unit tests, component tests, and contract tests.
  3. Publish static assets — upload to CDN (S3 + CloudFront, or Vercel, Cloudflare Pages).
  4. Update service registry — notify the shell application that a new version of this micro-frontend is available at its CDN URL.
  5. Smoke test — run integration tests against the staging composition.

The shell application does not need to redeploy when a micro-frontend updates. The shell loads remote entry URLs at runtime, so deploying a new remoteEntry.js to the CDN is sufficient.

Version your remote entry URLs if you need rollback capability:

https://cdn.example.com/checkout/v2.4.1/remoteEntry.js

The shell can be configured to pin to a specific version or always load the latest (latest/remoteEntry.js). For production, pin by default and automate version bumps through a central configuration service.
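
The pinning itself can be a pure lookup against the config service's manifest. A sketch — the manifest shape and the remoteEntryUrl helper are assumptions for illustration, not a real API:

```typescript
// Resolve a pinned, versioned remote entry URL from a central manifest.
// Manifest shape is hypothetical: { [mfeName]: { version } }.
interface MfeManifest {
  [name: string]: { version: string };
}

export function remoteEntryUrl(
  cdnBase: string,
  name: string,
  manifest: MfeManifest
): string {
  const entry = manifest[name];
  if (!entry) throw new Error(`Unknown micro-frontend: ${name}`);
  return `${cdnBase}/${name}/v${entry.version}/remoteEntry.js`;
}

// remoteEntryUrl("https://cdn.example.com", "checkout", { checkout: { version: "2.4.1" } })
// → "https://cdn.example.com/checkout/v2.4.1/remoteEntry.js"
```

Rolling back then means editing one version field in the config service, not redeploying any application.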


Testing Strategies

Unit and Component Testing

Each micro-frontend tests its own components in isolation, as a standard React application would. No special setup is required.

Contract Testing

This is the most important test category for micro-frontends. A contract test verifies that the interface between two micro-frontends — the shape of custom events, the API of exposed components, the structure of shared state — remains stable as each side evolves independently.

Tools like Pact formalize event/API contract testing; even a lightweight schema assertion in your test runner catches drift:

// Checkout's contract: what it emits
describe("checkout:purchase-complete event contract", () => {
  it("emits an event with orderId and total", async () => {
    // Trigger a purchase in the checkout micro-frontend
    // Assert the emitted event matches the documented schema
    expect(emittedEvent.detail).toMatchObject({
      orderId: expect.stringMatching(/^ord_/),
      total: expect.any(Number),
    });
  });
});

Run contract tests in every pipeline. A failed contract test should block deployment just as a failed unit test would.

Integration Testing

Composition-level integration tests run against the full assembled application. These tests are expensive and slow — use them sparingly for critical user journeys (login, checkout, primary CTA flows) rather than comprehensive coverage.


The Hidden Costs

Every architecture involves trade-offs. Micro-frontends have costs that are easy to underestimate before you commit.

Increased infrastructure complexity. You now operate N deployment pipelines, N CDN configurations, and N separate monitoring setups instead of one. The operational overhead is real and scales with the number of micro-frontends.

Bundle duplication risk. Without careful shared dependency configuration, the same library (e.g., lodash, date-fns, a UI component library) is bundled multiple times and shipped to the user. Module Federation mitigates this but requires active maintenance of the shared config across teams.

Debugging across boundaries is significantly harder. When a bug spans two micro-frontends — an event emitted by one is mishandled by another — you need correlated distributed tracing to diagnose it. Setting this up requires investment.

Organizational overhead. Micro-frontends only work if teams agree on and enforce contracts: event schemas, component APIs, version compatibility, shared state shape. This requires cross-team coordination infrastructure (design system team, platform team, documented contracts) that takes real engineering effort to maintain.

Initial load performance can degrade. Loading multiple remoteEntry.js files adds network round trips. HTTP/2 multiplexing reduces this, but poorly configured micro-frontends can increase Time to Interactive relative to a well-optimized monolith.


Decision Framework: Do You Actually Need Micro-Frontends?

Answer these questions before committing to the architecture:

1. How many engineers are working on the frontend? Below 30: a well-structured monorepo with clear module boundaries will serve you better. 30–100: micro-frontends may be justified if teams are consistently blocked by each other. 100+: the organizational overhead is likely justified.

2. Are teams actually blocked by deployment coupling today? If teams can deploy independently in your current architecture without coordination, you do not have a problem to solve.

3. Do you have genuinely independent domains? Micro-frontends work best when UI domains map cleanly to product domains with minimal shared state. If most pages need data from multiple domains, the integration cost is high.

4. Do you have the infrastructure maturity to support this? Independent deployment requires robust CI/CD, CDN management, feature flags, and observability. If you are still doing manual deployments or do not have distributed tracing, micro-frontends will compound your operational pain.

5. What is your migration path if this does not work out? Micro-frontends are hard to undo. A monorepo that becomes a monolith is recoverable. A distributed frontend that needs to be consolidated requires a significant rewrite.

If you answered "no" or "not yet" to most of these, invest in a well-structured monorepo with strong module boundaries, clear ownership conventions, and excellent CI/CD instead. You will get 80% of the organizational benefit at 20% of the complexity cost.

Micro-frontends are a solution to a real problem — but only when you actually have that problem.