Engineering · 19 min read

7× Faster E2E Tests: Solving a Laravel Session Bug That Looked Like Flaky Playwright Tests

How a session architecture problem masquerading as flaky Playwright tests brought a Laravel CI suite to its knees — and how per-worker sessions, a seed API, and a tuned PHP server fixed it.


If you’re new to Playwright or want a primer on why E2E testing matters, the article I wrote for Rocksoft — End-to-end testing with Playwright — covers the fundamentals. This article picks up where that one leaves off: a real case study of building a parallel Playwright suite for a Laravel + Inertia.js application, and the session architecture problem that made the whole thing much harder than expected.


Starting point

The application is a comprehensive dashboard — think services, customers, vehicles, invoices, multi-user workflows. Around 48 spec files existed from a previous effort, but the suite was in rough shape:

  • No CI integration
  • No stable authentication strategy
  • No parallelization — 1 worker only
  • Runtime: over 34 minutes, with a large portion of tests either skipped or failing intermittently

The goal was to end up with a test suite that runs reliably in CI and finishes in a reasonable time. The path there involved fixing individual tests, rebuilding the auth strategy from scratch, and diagnosing two non-obvious infrastructure bottlenecks.


Project structure

Before getting into what broke, here’s the directory layout that emerged from the stabilization pass:

tests/e2e/
├── *.spec.ts              # Test specifications (~48 files)
├── pages/                 # Page Object classes
│   ├── base.page.ts       # Shared navigation & wait methods
│   ├── service.page.ts
│   ├── customer.page.ts
│   └── ...
├── helpers/               # Reusable action flows
│   ├── auth.helper.ts
│   ├── seed.helper.ts
│   ├── service-form.helper.ts
│   └── ...                # 20+ domain-specific helpers
├── selectors/             # Centralized element selectors
│   ├── service.selectors.ts
│   ├── customer.selectors.ts
│   └── ...
├── fixtures/
│   ├── auth.fixture.ts    # Per-worker session injection
│   └── console-monitor.fixture.ts
├── config/
│   └── timeouts.ts
├── types/
│   └── selectors.ts
└── global-setup.ts        # Pre-test auth (runs once, one login per worker)

Selectors

All element selectors live in typed const objects, one file per domain:

// selectors/service.selectors.ts
export const ServiceSelectors = {
  createButton: '#filter-action-button',
  customerSelect: '#service-form-customer-select',
  vehicleSelect: '#service-form-vehicle-select',
  checkInButton: '#check-in-button',
  navigation: {
    attachments: 'serviceAttachmentsTab',
    invoice: 'service-invoice-tab',
  },
} as const;

The project uses Intercom for in-app support — tooltips and product tours that open contextual help articles. Intercom targets UI elements by their id attributes, so stable IDs were already a requirement in the markup. E2E selectors could reuse them without adding any test-specific attributes. data-testid is the standard Playwright recommendation, but it means scattering extra attributes across your templates for the sole purpose of testing. When the IDs are already there for another reason, it’s one less thing to maintain.
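A side note on the `as const`: it narrows every value to a literal type, so the object's keys can drive types elsewhere, which is exactly what the Page Object's navigateToSection method relies on. A minimal self-contained illustration (not the project's actual file, just the pattern):

```typescript
// `as const` freezes values as literal types, so keyof can enumerate sections.
const ServiceSelectors = {
  createButton: '#filter-action-button',
  navigation: {
    attachments: 'serviceAttachmentsTab',
    invoice: 'service-invoice-tab',
  },
} as const;

// 'attachments' | 'invoice' — a typo in a spec file fails at compile time.
type Section = keyof typeof ServiceSelectors.navigation;

function sectionSelector(section: Section): string {
  return `#${ServiceSelectors.navigation[section]}`; // ids, per the Intercom convention
}

console.log(sectionSelector('invoice')); // '#service-invoice-tab'
```

Without `as const`, `Section` would collapse to `string` and the compile-time check disappears.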

Page Object Model

Each area has a class extending BasePage, which handles shared navigation and network waiting. Individual page objects only define their own interactions:

// pages/service.page.ts
export class ServicePage extends BasePage {
  async navigate() {
    await super.navigate('/services');
  }

  async navigateToSection(section: keyof typeof ServiceSelectors.navigation) {
    const selector = ServiceSelectors.navigation[section];
    await this.page.locator(`#${selector}`).click();
    await this.waitForNetworkIdle();
  }
}

No raw page.locator() calls scattered across spec files.

Helpers

The distinction between Page Objects and Helpers is worth drawing explicitly. A Page Object represents a page or section of the application — it owns navigation to that page and exposes interactions as methods. A Helper represents a reusable multi-step flow that may span multiple pages or multiple Page Objects.

If you’re describing where something is in the app, it belongs in a Page Object. If you’re describing how to do something that appears across multiple tests, it belongs in a Helper.

// helpers/service-form.helper.ts
export async function fillServiceForm(page: Page, data: ServiceFormData) {
  await page.locator(ServiceSelectors.customerSelect).click();
  await page.locator('input').fill(data.customerName);
  await page.locator('#service-form-customer-select-dropdown div').first().click();
  // ...
}

SeedHelper, for example, isn’t tied to any page — it calls the seed API and returns created record IDs. ServicePage is a Page Object tied to /services. A test composes both: seed to create data, Page Object to navigate, Helper to fill the form. Each layer has one responsibility.


Phase 1: Getting tests to pass

The first task was making the 48 existing spec files actually pass. Most didn’t — stale selectors, changed routing, timing assumptions that had long since broken. A stabilization pass fixed them one by one.

The more important change was rewriting tests that depended on existing database state. If a test assumes a specific customer record exists, it can only run after whichever test created that record — which forces sequential execution. Rewriting each test to create its own data removes that dependency. This turned out to be a prerequisite for parallelization: a test that requires shared state can’t safely run alongside other tests that may modify it.


Phase 2: The session architecture problem

TL;DR: Playwright’s standard storageState pattern shares one session cookie across all workers. With Laravel’s file-based sessions, concurrent workers corrupt each other’s CSRF tokens via flock() contention. The fix: authenticate N times in global-setup.ts — once per worker — and bind each worker to its own session file via testInfo.parallelIndex.

This is the part that looked like a flaky test problem but wasn’t.

What Playwright’s docs recommend

The standard approach is to authenticate once in global-setup.ts, save the browser state to a file, and load it for every test via storageState in playwright.config.ts:

// playwright.config.ts — the standard approach
use: {
  storageState: 'tests/e2e/auth.json',
}

It’s a clean pattern. With one worker, it works perfectly.

What breaks with multiple workers

The moment I increased workers to 2 or 4, tests started failing intermittently: login page appearing mid-test, 419 CSRF errors, redirects unrelated to whatever the test was actually doing.

The root cause: Laravel’s SESSION_DRIVER=file. All workers shared the same session cookie from auth.json. With 4 workers running concurrently, each sent the same session ID to the server. Laravel reads the session file, potentially modifies it (regenerating CSRF tokens, updating flash data), and writes it back. The OS serializes these via flock(), but last-writer-wins: whatever CSRF token worker 3 wrote gets overwritten by worker 4 a millisecond later. Worker 3’s next request then fails with 419.

The failures were non-deterministic and looked exactly like timing issues. The natural instinct was to add more waits and retries. That didn’t help — because the problem was at the server level, not in the tests.
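The last-writer-wins race needs no server to demonstrate. Here is a toy simulation in plain TypeScript (hypothetical names, deliberately simplified): two workers each read the shared session, regenerate a CSRF token, and write the whole session back; the second write silently invalidates the first worker's token.

```typescript
// Toy model of Laravel's read-modify-write session cycle with one shared session.
// Illustrative only — not the framework's real code.
type Session = { csrfToken: string };

// The single server-side session file all workers share.
let sessionFile: Session = { csrfToken: 'initial' };

function handleRequest(worker: string): string {
  const session = { ...sessionFile };        // read the session file
  session.csrfToken = `token-for-${worker}`; // regenerate the CSRF token
  sessionFile = session;                     // write the whole session back
  return session.csrfToken;                  // token this worker will send next
}

const worker3Token = handleRequest('worker-3');
const worker4Token = handleRequest('worker-4'); // overwrites worker 3's write

// Worker 3's next request presents a token the server no longer holds → 419.
console.log(worker3Token === sessionFile.csrfToken); // false — worker 3 is stale
console.log(worker4Token === sessionFile.csrfToken); // true  — last writer wins
```

flock() only serializes the writes; it does nothing to prevent the lost update, which is why adding waits on the test side could never fix this.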

What I tried that didn’t work

  • More retries and longer timeouts — masked symptoms, didn’t fix the cause
  • Clearing cookies before re-login — broke CSRF
  • Posting directly to /login via page.request — page.request has its own separate cookie jar, doesn’t authenticate as the browser session
  • Various login timing hacks — irrelevant to the actual problem

Every fix that targeted the test code was chasing a symptom.

The fix: per-worker sessions

Each worker needs its own session — so no two workers ever write to the same server-side session file.

global-setup.ts now logs in N times before any tests start — once per configured worker:

// global-setup.ts
const workers = config.workers ?? 4;

for (let i = 0; i < workers; i++) {
  const storageFile = path.join(authDir, `worker-${i}.json`);
  await loginFresh(email, password, baseUrl, storageFile);
}

Each login creates a distinct PHP session. Worker 0 holds session A, worker 1 holds session B — they never touch each other’s session file.

A custom fixture then assigns each worker its own storage file at runtime via testInfo.parallelIndex:

// fixtures/auth.fixture.ts
export const test = base.extend<{}, { workerStorageState: string }>({
  workerStorageState: [async ({}, use, testInfo) => {
    const storageFile = path.join(authDir, `worker-${testInfo.parallelIndex}.json`);
    await use(storageFile);
  }, { scope: 'worker' }],

  storageState: async ({ workerStorageState }, use) => {
    await use(workerStorageState);
  },
});

The scope: 'worker' is key — the fixture initializes once per worker process, not once per test. Spec files just import test from the fixture:

import { test, expect } from './fixtures/auth.fixture';

test('creates a new service', async ({ page }) => {
  // page is already authenticated — no beforeEach needed
});

No login boilerplate. No isLoggedIn() checks. Authentication is infrastructure, not test code.

As a belt-and-suspenders measure, CI also switches to database sessions (SESSION_DRIVER=database). Database sessions replace file I/O with atomic SQL operations — no flock(), no last-writer-wins. This is CI-only; production and local environments are untouched.

Rate limiting in global-setup

global-setup.ts logs in N times in rapid succession — once per worker — before any tests run. Depending on how your rate limiter is configured, those back-to-back login requests can trip it. The fix is a single CI environment variable:

RATE_LIMIT_ENABLED=false

This is safe to disable in a controlled CI environment and avoids mysterious login failures that have nothing to do with the session problem.

pressSequentially instead of fill for Vue + Inertia.js forms

This isn’t related to the parallel problem, but it’s a real gotcha that surfaced during the login stabilization work. Playwright’s fill() sets the input value directly — it doesn’t fire individual keyboard events. With an Inertia.js form backed by a Vue v-model, fill() can race with the component’s onblur handler, which resets the local state before Inertia’s useForm picks up the change. The result: POST /login arrives with an empty body.

The fix is to type keystroke-by-keystroke:

await emailInput.pressSequentially(email, { delay: 10 });
await passwordInput.pressSequentially(password, { delay: 10 });
// wait for Vue microtask queue to flush v-model into Inertia useForm
await page.waitForTimeout(500);

pressSequentially fires individual input events that Vue v-model commits synchronously. The short delay mimics realistic typing cadence and avoids the race. This applies to any Vue component with an onblur handler that modifies form state.


Phase 3: PHP’s built-in server is single-process by default

Once the session problem was solved, another bottleneck appeared: php artisan serve handles one request at a time.

With 4 Playwright workers each making concurrent requests — navigating pages, submitting forms, calling the seed API — all those requests queue behind each other. A login that should take 200ms takes 2 seconds because three other requests are ahead of it. Playwright’s timeout fires. The test “fails.”

The fix is one environment variable:

PHP_CLI_SERVER_WORKERS=16 php artisan serve --host=127.0.0.1 --port=8000 &

This tells PHP to spawn 16 worker processes. 4 Playwright workers generating burst traffic is no longer a bottleneck.
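A back-of-envelope model makes the queueing effect concrete. This is a toy formula, not a benchmark; the numbers mirror the 200ms example above.

```typescript
// Toy model: N concurrent requests, each taking serviceMs, spread over P server
// processes. Worst-case latency for the last request is ceil(N / P) * serviceMs.
function worstCaseLatencyMs(
  concurrentRequests: number,
  serverProcesses: number,
  serviceMs: number,
): number {
  return Math.ceil(concurrentRequests / serverProcesses) * serviceMs;
}

console.log(worstCaseLatencyMs(4, 1, 200));  // 800 — requests queue behind one process
console.log(worstCaseLatencyMs(4, 16, 200)); // 200 — every request gets its own process
```

With each Playwright worker issuing several requests per page load, the real queues were deeper than this model suggests, which is why timeouts fired well before the 90-second limit felt plausible.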

This only matters in CI. Locally, Herd and Valet use nginx + PHP-FPM and handle concurrency correctly.


Phase 4: Seed API — replacing UI-driven test setup

The original tests created their own data by navigating the UI: go to customers, click “New”, fill the form, save, navigate to the created record. That has two problems:

  1. Slow. A multi-step UI setup flow can take 5–10 seconds before the actual test starts.
  2. Fragile. If the setup UI breaks, the test fails — not because the feature under test is broken.

The solution: a dedicated seed API that creates records directly via PHP and returns their IDs.

POST /e2e/seed/customer  →  { id: 42, name: "E2E Test Company" }
POST /e2e/seed/service   →  { id: 7, customer_id: 42 }
POST /e2e/seed/invoice   →  { id: 19, service_id: 7 }

The routes are protected — only available when APP_ENV is not production. Tests navigate straight to the created resource URL:

const seed = SeedHelper.fromPage(page);
const customer = await seed.createCustomer({ name: 'E2E Test Company' });
const service  = await seed.createService({ customer_id: customer.id });

await page.goto(`/services/${service.id}`);
// test starts here — no setup navigation needed

One important detail: seed calls use page.evaluate(fetch) rather than Playwright’s page.request. The reason: page.request has its own cookie jar separate from the browser session and can’t authenticate as the logged-in user. page.evaluate(fetch) runs inside the browser context and sends the actual session cookie:

const result = await page.evaluate(async ({ url, data }) => {
  const res = await fetch(url, {
    method: 'POST',
    credentials: 'include',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  return { status: res.status, body: await res.text() };
}, { url, data });
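The SeedHelper used in the earlier example is essentially a thin class around that evaluate call. A sketch under assumptions — the article only shows the fromPage/createCustomer surface, and the PageLike interface below narrows Playwright's Page to the single method used, which also makes the helper trivial to unit-test:

```typescript
// Sketch of a seed helper wrapping page.evaluate(fetch).
// Hypothetical structure — only the fromPage/createX surface comes from the suite.
interface PageLike {
  evaluate<R, A>(fn: (arg: A) => Promise<R>, arg: A): Promise<R>;
}

class SeedHelper {
  private constructor(private page: PageLike) {}

  static fromPage(page: PageLike): SeedHelper {
    return new SeedHelper(page);
  }

  private post<T>(url: string, data: unknown): Promise<T> {
    // Runs inside the browser context, so the real session cookie is sent.
    return this.page.evaluate(async ({ url, data }: { url: string; data: unknown }) => {
      const res = await fetch(url, {
        method: 'POST',
        credentials: 'include',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data),
      });
      if (!res.ok) throw new Error(`Seed failed: ${res.status}`);
      return (await res.json()) as T;
    }, { url, data });
  }

  createCustomer(data: { name: string }) {
    return this.post<{ id: number; name: string }>('/e2e/seed/customer', data);
  }

  createService(data: { customer_id: number }) {
    return this.post<{ id: number; customer_id: number }>('/e2e/seed/service', data);
  }
}
```

Failing fast on a non-2xx status matters here: a silent 419 from a seed call would otherwise surface later as a baffling "record not found" failure in the middle of the test.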

Phase 5: MySQL deadlocks in parallel PHPUnit

Running PHPUnit with --parallel introduced a separate backend problem: MySQL deadlocks during test setup.

Laravel’s RefreshDatabase trait calls migrate:fresh to reset the schema before tests. With 8 parallel workers all calling migrate:fresh simultaneously, MySQL’s DDL operations — DROP TABLE, CREATE TABLE — on shared tables caused lock timeouts:

SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded

The fix: prevent workers from running migrations at all. The database is pre-migrated before the parallel run starts; workers should skip directly to transactions.

// tests/TestCase.php
protected function setUpProcess(): void
{
    RefreshDatabaseState::$migrated = true;
}

setUpProcess() runs once per worker process. Setting $migrated = true tells RefreshDatabase to skip the migration phase and assume the schema is current.


Phase 6: The final playwright.config.ts

A few config choices worth explaining:

export default defineConfig({
  testDir: './tests/e2e',
  fullyParallel: false,  // files in parallel, tests within a file sequentially
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 1,
  workers: 4,
  use: {
    baseURL: process.env.E2E_TEST_URL,
    // No global storageState — each worker authenticates via auth fixture
    trace: 'on-first-retry',
  },
  timeout: 90000,
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
      testIgnore: ['**/visual.spec.ts'],
    },
    {
      name: 'visual',
      use: { ...devices['Desktop Chrome'] },
      testMatch: ['**/visual.spec.ts'],
      dependencies: ['chromium'],
    },
  ],
  globalSetup: './tests/e2e/global-setup.ts',
});

fullyParallel: false — Playwright can either run all tests in parallel regardless of file, or run files in parallel with tests within each file staying sequential. fullyParallel: false is the safer default for a complex CRUD application: tests within the same spec file often share context (creating a record in one test, then checking it in the next). Parallelizing at the file level gives most of the speed benefit without introducing intra-file ordering issues.

trace: 'on-first-retry' — Traces are recorded only when a test fails and gets retried. This keeps CI artifact sizes small while still capturing a complete browser trace (network, console, screenshots) for any test that actually needed a retry.

Visual tests as a separate project — The visual project runs only visual.spec.ts and lists chromium as a dependency, so functional tests always run first. A visual regression against a page that hasn’t even rendered correctly is a meaningless failure.

CSRF and seed routes — The seed API routes are registered in a separate routes/web/e2e.php and excluded from CSRF verification in VerifyCsrfToken.php. They’re gated by APP_ENV !== 'production' at the route level, so the CSRF exemption only applies in test environments.


CI workflow

The workflow only triggers when E2E-relevant files change — frontend sources, test files, Playwright config. Backend-only changes don’t run the expensive E2E suite. Only the latest push on a branch occupies a runner; in-progress runs are cancelled automatically.

A few things the workflow has to get right before npx playwright test runs:

  • MySQL in-memory — running the database via --tmpfs eliminates I/O as a bottleneck and keeps setup fast.
  • Database sessions — SESSION_DRIVER is patched to database before the server starts, as a second layer of session isolation on top of the per-worker auth files.
  • Rate limiting off — global-setup.ts logs in N times in rapid succession. Without disabling the rate limiter in CI, those back-to-back login requests can trip it and fail the setup phase before a single test runs.
  • Server readiness check — php artisan serve returns immediately while still starting. A short curl health-check loop before npx playwright test ensures the login page is actually reachable when global-setup.ts fires.
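In CI the readiness check is a shell curl loop; the same idea fits in a few lines of TypeScript with the probe injected. This is a hypothetical helper, shown because the retry logic is the part worth getting right:

```typescript
// Polls a probe until it reports ready, or gives up after maxAttempts.
// Hypothetical sketch — CI uses an equivalent shell curl loop.
async function waitForServer(
  probe: () => Promise<boolean>, // e.g. () => fetch(url).then(r => r.ok, () => false)
  { maxAttempts = 30, delayMs = 1000 } = {},
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await probe()) return true; // server is up — safe to start Playwright
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return false; // never became ready — fail the CI job before tests run
}
```

The probe must swallow connection-refused errors and report false; a probe that throws turns "server still booting" into a hard CI failure.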

The Playwright HTML report is uploaded as a CI artifact on every run — including failures — so there’s always a trace-level record of what happened.


The debugging journey

The path to the session fix wasn’t straight. The failures produced symptoms that pointed in completely wrong directions.

What the failures looked like: A test navigates to a page, performs some action, and lands on /login — as if the session expired mid-test. Or a form submission returns 419. Or the test passes locally but fails consistently in CI with 2+ workers. Looks exactly like a flaky test or a timing problem.

First instinct — fix the tests. Add more waits. Increase timeouts. Add retries. Retries helped slightly by masking intermittent failures. The root cause was untouched.

Second instinct — fix the login. Maybe the login itself was fragile. Several attempts followed: clearing cookies before login, posting to /login via page.request, tweaking how the form was filled (fill vs pressSequentially — real bug, unrelated to the parallel issue). None of them fixed the parallel failures.

The breakthrough: Reading the Laravel session driver source and noticing that SESSION_DRIVER=file writes to disk on every request. Cross-referencing with the number of concurrent workers. The flock() contention hypothesis explained everything: why failures were random, why they increased with more workers, why retries helped (retry gets a fresh lock), why it never happened locally with 1 worker.

What actually helped: Instead of looking at test output, look at the server logs. Which requests were failing? What was the session ID across failing requests? Were multiple workers sending the same session cookie? Once those questions had answers, the fix was obvious.

The lesson: When E2E tests fail intermittently and the failures don’t correlate with any specific test logic, look at shared infrastructure — session storage, database state, server process model — before looking at the tests themselves.


Results

| Metric              | Before                       | After                    |
|---------------------|------------------------------|--------------------------|
| Workers             | 1                            | 4                        |
| Auth strategy       | 1 shared auth.json           | Per-worker worker-N.json |
| Session driver (CI) | file                         | database                 |
| PHP server workers  | 1                            | 16                       |
| Test runtime        | >34 min (many skipped/flaky) | ~5 min (~140 tests)      |
| Test data creation  | UI navigation                | Seed API + direct URL    |
| Session conflicts   | Yes                          | No                       |

What made the difference

Every fix, in order of impact:

  • Per-worker auth files — authenticate N times in global-setup.ts, one session per worker; bind each worker to its file via testInfo.parallelIndex
  • Database sessions in CI — SESSION_DRIVER=database eliminates file-lock contention as a belt-and-suspenders measure on top of session isolation
  • PHP CLI server workers — PHP_CLI_SERVER_WORKERS=16 prevents burst requests from queuing behind a single-process server
  • Rate limiting off in CI — RATE_LIMIT_ENABLED=false prevents rapid global-setup logins from tripping the limiter before tests even start
  • Seed API — replace UI-driven test setup with direct API calls; tests navigate straight to the created resource URL
  • page.evaluate(fetch) for authenticated requests — use browser context instead of page.request to send the actual session cookie
  • RefreshDatabaseState::$migrated = true — pre-migrate the DB before the parallel PHPUnit run; workers skip straight to transactions
  • fullyParallel: false — parallelize at the file level, preserve intra-file test order
  • Server readiness check in CI — curl loop before npx playwright test; php artisan serve returns before it’s actually ready to accept connections

Bonus: a universal health-check scanner

One of the last additions to the suite was a spec that visits every page in the application and checks for JavaScript errors and Vue warnings in the browser console. No assertions about functionality — just “does this page load without spewing errors?”

// health-check-console-scan.spec.ts
test('no console errors on services page', async ({ page }) => {
  const errors: string[] = [];
  page.on('console', msg => {
    if (msg.type() === 'error') errors.push(msg.text());
  });
  await page.goto('/services');
  expect(errors).toHaveLength(0);
});

This kind of scanner has caught real regressions — a missing translation key, a Vue prop type mismatch, a failed chunk load — that functional tests didn’t cover because the feature still “worked” despite the console noise.


Frequently asked questions

Does this session approach work with Sanctum or JWT?
Yes, with caveats. JWT is stateless — there is no server-side session file, so sharing a token across workers is safe and the per-worker auth fix is unnecessary. Sanctum in cookie-based mode (the default for SPA auth) uses Laravel sessions under the hood, so the same file-lock contention applies. Sanctum in token mode (Authorization: Bearer) is stateless and not affected.
Why page.evaluate(fetch) instead of page.request for the seed API?
Playwright's page.request has its own isolated cookie jar — it doesn't share cookies with the browser context. That means it can't authenticate as the logged-in user. page.evaluate(fetch) runs inside the browser, uses the same session cookie the browser holds, and hits the seed endpoint as the authenticated user. This distinction trips up most developers the first time they try to make authenticated API calls from a test.
Can the seed API accidentally be left on in production?
The routes are registered conditionally — only when APP_ENV is not 'production'. As a second layer, they're in a separate route file (routes/web/e2e.php) that's only loaded in non-production environments. In practice, a production deploy that somehow loaded the e2e routes would also need the correct session cookie to use them, since they rely on standard Laravel auth middleware.
Why fullyParallel: false instead of true?
fullyParallel: true runs every individual test concurrently across all workers. fullyParallel: false parallelizes at the file level — tests within a single spec file run sequentially. For a CRUD application, related tests often share implicit context: one test creates a record, the next verifies it. Intra-file ordering is usually intentional. Parallelizing at the file level gives most of the speed gain without breaking that implicit contract.
Does pressSequentially slow tests down noticeably?
Only in global-setup, where it's used for login. With a 10ms delay per keystroke and a typical email + password of around 30 characters combined, it adds roughly 300ms per login. With 4 workers that's about 1.2 seconds of extra setup time — negligible against a 5-minute suite. Inside actual test spec files, fill() works fine for form fields that don't have the Inertia useForm race condition.
Does this apply if I use Herd or Valet locally?
No — the PHP_CLI_SERVER_WORKERS fix is only needed for php artisan serve, which is single-process. Herd and Valet sit in front of PHP-FPM via nginx, which handles concurrent requests correctly out of the box. The per-worker session isolation still matters locally if you run multiple workers, but the PHP server bottleneck simply doesn't exist.

One more thing: Vue 2, an older Laravel, and why it doesn’t matter

The application this suite covers isn’t running the latest stack — it’s Vue 2 and an older version of Laravel. That’s worth naming, because it’s a common reason teams postpone E2E testing: “we’ll add tests after the rewrite.”

That reasoning has it backwards. A migration from Vue 2 to Vue 3 is exactly the scenario where a solid E2E suite earns its keep. Behavioral regressions during a framework migration are notoriously hard to catch with unit tests — components change internally while the user-facing behavior is supposed to stay the same. A test that clicks through a service form and verifies the saved data doesn’t care which version of Vue is rendering it. It fails when the behavior changes, regardless of why.

The 34-minute flaky suite wasn’t useful. The 5-minute reliable one will be a genuine safety net when the migration starts.


Playwright vs. the alternatives

Laravel Dusk is the obvious starting point for any Laravel project — it’s official, requires no Node.js setup, and tests are written in PHP alongside the rest of the application. For teams that want to stay entirely in the PHP ecosystem, Dusk is a reasonable choice. The tradeoff: it runs on top of ChromeDriver via the WebDriver protocol, which is slower than Playwright’s CDP-based approach, and parallel execution requires more manual work. Worth noting: the session isolation problem described in this article would apply to Dusk equally — it’s a Laravel server-side issue, not a Playwright-specific one.

Cypress was my tool on earlier projects. The interactive test runner is genuinely good and made E2E testing approachable when alternatives were far more painful to set up. But Cypress runs tests inside the browser process, and its internals lean heavily on jQuery — which means selector handling, event dispatching, and certain async patterns don’t always match how a real user’s browser behaves. It was also single-tab only for a long time, making flows that involve redirects, popups, or new contexts awkward. Cross-browser support arrived late and still feels secondary. I’ve moved on.

Selenium / WebdriverIO is where most teams started, and for good reason — Selenium is language-agnostic, battle-tested, and the right answer if you need Java or Python test suites. WebdriverIO is a solid Node.js wrapper that has improved considerably. The fundamental ceiling is the WebDriver protocol itself: every command is a round-trip HTTP request to a browser driver. Playwright bypasses this by speaking CDP (and equivalent protocols for Firefox and WebKit) directly, which is why it’s faster and why features like page.evaluate and network interception work cleanly without workarounds.

Playwright is my current default for anything browser-based. It’s genuinely cross-browser (Chromium, Firefox, WebKit all work well), TypeScript-first, has the best parallelization story of any tool in this space, and its architecture doesn’t compromise — you’re controlling the browser at the protocol level. That said, the tooling landscape moves fast. I’m not dogmatic about it; if something better comes along, I’ll use it.


AI assistance in E2E debugging

One thing that has genuinely changed how I debug test failures: Claude Code with the Playwright MCP server. Instead of staring at 300 lines of CI output trying to reconstruct what happened, you can describe the symptom and get targeted diagnosis — or let the AI drive the browser directly, navigate to the failing state, and read what the console actually says.

The browser console access is particularly useful. Claude Code can capture live console output — errors, Vue warnings, failed network requests — the same signals the health-check scanner collects automatically, but interactively during a debugging session. It’s the difference between reading a crash report and watching the crash happen.

There’s a lot of hype right now around AI writing tests from scratch — “vibe testing,” generating spec files from a prompt and shipping them. That produces exactly the kind of suite this article started with: tests that technically exist but don’t run reliably, don’t isolate state, and fail for reasons that have nothing to do with the feature under test. AI is a genuinely good debugging partner and a fast way to scaffold boilerplate. Engineering thinking — understanding why tests need to be self-contained, why shared session state breaks parallel execution, why symptoms and root causes are different things — is still what makes a test suite actually work.

