Demand planning: a practical guide for ecommerce operations


Buying too much stock ties up cash and warehouse space. Buying too little costs you sales and customer trust. Most operations managers know this tradeoff firsthand.

On a single channel, a spreadsheet and focused attention can handle demand planning. Across four or five channels, it becomes a problem of data, coordination, and timing that a spreadsheet can't keep aligned.

Your data is the forecast

Every forecast is only as good as its inputs. If your inventory data is fragmented across systems, your demand plan will be wrong in ways that are hard to trace back to the source.

The Linnworks 2026 State of Commerce Operations report, which surveyed mid-market retailers with annual revenues between $7.5 million and $100 million across the UK and US, found that only 33.2% of UK retailers and 37.2% of US retailers report excellent inventory visibility across channels and warehouses. Most operate with gaps they call “minor.” Those gaps stop being minor once you’re making reorder decisions on top of them.

With the average mid-market retailer now selling across 4.15 channels in the UK and 4.25 in the US, demand signals come from everywhere. 

A velocity spike on Amazon doesn’t mean the same thing as the same spike on your DTC site. A clearance promotion on one channel distorts the historical data you’ll use for next quarter’s buy. If your inventory sits across a WMS, an OMS, and several channel dashboards that don’t sync in real time, every forecast you produce is built on degraded data.

Fix this by consolidating demand signals before you try to interpret them. Start practically: list your data sources, standardize key fields like SKU, stock level, and channel, and integrate systems where possible via middleware, APIs, or robust imports and exports. Aim to sync sales and inventory data into a single dashboard or database that updates in near real time. If full integration isn’t feasible yet, set up consistent manual exports and reconciliations on a fixed schedule. The goal: one source of truth for all channels, warehouses, and systems before forecasting begins.
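The standardize-and-merge step can be sketched in a few lines of Python. This is a minimal illustration, not a production integration: the field-name variants ("SKU", "sku_code", "Qty", and so on) are assumptions standing in for whatever your own channel exports actually emit.

```python
# Minimal sketch of consolidating per-channel exports into one normalized
# table. The field-name variants below are illustrative assumptions; map
# them to whatever your own systems export.
FIELD_MAP = {
    "SKU": "sku", "sku_code": "sku", "Item": "sku",
    "Qty": "stock", "stock_level": "stock", "Available": "stock",
}

def normalize(row: dict, channel: str) -> dict:
    """Rename known field variants to canonical names and tag the channel."""
    out = {"channel": channel}
    for key, value in row.items():
        out[FIELD_MAP.get(key, key.lower())] = value
    return out

def consolidate(exports: dict) -> list:
    """Merge per-channel export rows into one flat, normalized list."""
    merged = []
    for channel, rows in exports.items():
        merged.extend(normalize(r, channel) for r in rows)
    return merged
```

Once every row shares the same schema, reconciliation and forecasting can run against one table instead of four dashboards.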


Inputs that move the needle

Start with what you control.

Historical sales by SKU and channel: 12 months minimum; 24 is better for seasonal categories. Weight recent data more heavily for fast-moving SKUs. Strip out anomaly periods (COVID-era distortions, one-off promotions, channel outages) or flag them so they don’t skew your baseline.

Sell-through rate by channel: A product moving fast on eBay and slow on your own site needs different reorder strategies per channel. The blended average hides where you’re actually at risk. This SKU-level view makes a forecast operationally useful instead of just directionally correct.

Supplier lead-time variability is underweighted in most planning. If your supplier usually delivers in 25 days but sometimes takes 45, calculating safety stock off the average will leave you short during every delay cycle. 

Instead, use the 80th percentile lead time for buffer calculations, not the average. To calculate it, sort recent lead times from shortest to longest and take the value at the 80th percentile.

For example, if your last 10 orders had lead times between 22 and 45 days, the 80th percentile is the 8th value in that sorted list. This single adjustment does more to prevent stockouts than most algorithm tweaks.
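The percentile pick is a one-line calculation. Here is a small sketch using the nearest-rank method, which matches the sort-and-pick procedure described above:

```python
import math

def percentile_lead_time(lead_times, pct=0.8):
    """Nearest-rank percentile: sort ascending and take the
    ceil(pct * n)-th value. With 10 observations and pct=0.8,
    that is the 8th value in the sorted list."""
    ordered = sorted(lead_times)
    rank = math.ceil(pct * len(ordered))  # 1-based rank
    return ordered[rank - 1]
```

Feeding ten recent lead times between 22 and 45 days returns the 8th-shortest value, which is the number to plug into your safety stock formula instead of the average.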

Days of inventory on hand (DOH): 45 days of cover sounds comfortable. With a 30-day lead time and variable demand, it’s not. Tracking DOH by SKU across locations shows where you’re truly exposed versus where you’re simply overstocked.

Promotional calendars and competitive pricing intel are useful once your internal data is clean. Most teams add those external inputs on top of data they can’t trust, then wonder why the forecast diverges from reality.

The number you set once and never check

Reorder points and safety stock levels are usually set once during system setup and rarely changed. Six months later, demand has shifted, supplier lead times have moved, and the parameters haven’t.

Reorder point is the inventory level that triggers a new purchase order:

Reorder point = (average daily sales × lead time in days) + safety stock

Safety stock covers demand spikes and lead-time variability. A standard approach uses the difference between your maximum and average lead time, multiplied by average daily sales. If you sell 20 units per day and your supplier sometimes runs 20 days late versus the average, you need 400 units of safety stock on top of your lead-time demand.
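The two formulas above translate directly into code. A minimal sketch, using the max-minus-average lead time approach described in this section:

```python
def safety_stock(avg_daily_sales, avg_lead_days, max_lead_days):
    """Buffer for lead-time variability:
    (max lead time - average lead time) x average daily sales."""
    return (max_lead_days - avg_lead_days) * avg_daily_sales

def reorder_point(avg_daily_sales, avg_lead_days, max_lead_days):
    """Inventory level that triggers a new purchase order:
    (average daily sales x lead time) + safety stock."""
    return (avg_daily_sales * avg_lead_days
            + safety_stock(avg_daily_sales, avg_lead_days, max_lead_days))
```

With 20 units per day, a 25-day average lead time, and a 45-day worst case, the safety stock is 400 units and the reorder point is 900. Both inputs should be recalculated as sales velocity and lead times drift.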

Both inputs change. Average daily sales move with seasonality, channel mix, and promotions. Lead times move with supplier disruptions and freight congestion. If you set these in January based on Q4 data, they’re probably wrong by Q3.

Quarterly reorder point reviews for your top 50 SKUs are the minimum. To focus effort, prioritize SKUs for more frequent review based on sales velocity, gross margin, and demand volatility. High-velocity products, bestsellers with big margin impact, or items with lumpy demand should be checked monthly; slower-moving or lower-value SKUs can stay on a quarterly cycle. It feels like overhead until a stockout during peak week wipes out a season’s margin and floods customer service with issues.


Seasonal planning starts earlier than you think

If your Q4 buying window requires POs by early August (and it does when lead times from Asia are 90 days), your Q4 demand plan needs to be finalized in July. Most operations teams don’t plan that early. Demand planning gets treated as a finance exercise tied to budget season instead of an operational requirement driven by supplier lead times.

Retailers who get through peak without stockouts aren’t always better forecasters. They build planning timelines backward from first required delivery date, which means they’re buying for November before summer ends.

Whatever your next major sales period is, the purchase orders that support it need to go out before most teams think planning has started.
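Working backward from the first required delivery date is simple date arithmetic. A minimal sketch; the 7-day processing buffer is an illustrative assumption, not a standard:

```python
from datetime import date, timedelta

def latest_po_date(first_delivery, lead_time_days, buffer_days=7):
    """Work backward from the first required delivery date to the
    latest date a purchase order can go out. buffer_days covers PO
    processing and is an assumed default, not a fixed rule."""
    return first_delivery - timedelta(days=lead_time_days + buffer_days)
```

A November 1 first delivery with a 90-day lead time puts the latest PO date in late July, which is why Q4 plans finalized in September are already too late.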

Where AI-assisted forecasting earns its place

AI in demand forecasting has moved from pilot project to everyday operations. The Linnworks 2026 State of Commerce Operations report highlights demand forecasting as a key area where mid-market retailers are expanding AI use beyond customer support. Retailers that embed AI into operational workflows report better forecasting accuracy and less manual work.

Not every deployment delivers that. AI models trained on fragmented, inconsistent history produce confident outputs that are wrong in the same ways your manual process was wrong.

Retailers getting real value from AI forecasting tools have one thing in common: they fixed their data infrastructure first. Feed a model fragmented inventory records and anomaly-riddled sales history and it will find patterns in noise that shouldn’t exist. Accuracy improvements come from the data quality work that precedes the AI deployment, not the algorithm itself. Before implementing AI, do the basics: deduplicate product and inventory records so each SKU and transaction has a single authoritative entry; standardize fields such as SKU codes, dates, and channel names across sources; and run anomaly checks to flag outliers or errors like sudden unexplained spikes, missing values, or mismatched dates. Clean, standardized, validated data lets AI deliver insight instead of amplifying mistakes.
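Two of those basics, deduplication and anomaly flagging, can be sketched with the standard library alone. The z-score threshold of 3 and the last-write-wins merge rule are illustrative assumptions; your own merge logic and outlier definition may differ.

```python
from statistics import mean, stdev

def dedupe_by_sku(records):
    """Keep one authoritative record per SKU. Last write wins here;
    substitute your own merge rule (e.g. most-recently-updated)."""
    return {r["sku"]: r for r in records}

def flag_anomalies(daily_units, z=3.0):
    """Return indices of days whose sales sit more than z standard
    deviations from the mean. Review these before training a model."""
    mu, sigma = mean(daily_units), stdev(daily_units)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(daily_units) if abs(v - mu) > z * sigma]
```

Flagged periods are candidates for exclusion or annotation, not automatic deletion: a spike that matches a known promotion is signal, not noise.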

When inventory data is centralized and current, the difference shows up quickly. Spotlight AI in Linnworks Advanced surfaces purchasing recommendations from live stock levels across all channels, grounded in what’s actually in your warehouse today, not a spreadsheet export from last Thursday. For teams on fragmented data, that capability doesn’t help much. The model outputs with the same confidence either way; one of those outputs is just wrong.


Planning cadence that sticks

Your demand plan is only as reliable as your last update. Most teams update less often than they think.

Weekly, monthly, and quarterly cadences serve different purposes and shouldn’t be collapsed:

Weekly: Fast-movers and high-risk SKUs. Check stock cover, flag anything near its reorder point, and catch unexpected channel spikes before they become stockouts. With centralized inventory data, this is a 20-minute review.

Monthly: Full reorder point and safety stock review for your top 100–200 SKUs. Check whether supplier lead times have shifted. Update demand assumptions based on actuals versus last month’s forecast.

Quarterly: Seasonal planning for the next 90 days, with purchase orders mapped to first required delivery dates rather than launch dates. Identify slow-moving or excess stock that must be cleared before the next seasonal intake. Reassess which SKUs need active planning versus passive reorder rules, and confirm whether current supplier lead times match what your system shows.
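The weekly check described above reduces to a projection: will current stock cross the reorder point within the review horizon? A minimal sketch, with dictionary keys as illustrative assumptions:

```python
def weekly_flags(skus, horizon_days=7):
    """Flag SKUs whose projected stock falls to or below the reorder
    point within the review horizon at current daily velocity.
    The dict keys are illustrative assumptions."""
    flagged = []
    for s in skus:
        projected = s["stock"] - s["daily_sales"] * horizon_days
        if projected <= s["reorder_point"]:
            flagged.append(s["sku"])
    return flagged
```

Run against centralized stock data, a list like this is what turns the weekly review into a 20-minute exception check rather than a full inventory walk-through.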

A forecast built on six-month-old assumptions gives you false confidence in numbers that no longer reflect your business. That’s worse than no forecast at all.


FAQ

What are the first steps for teams with limited integration resources to consolidate data?

Start with an audit, not a purchase. List every system that touches inventory or sales data—your WMS, OMS, channel dashboards, even spreadsheets—and document which fields each one tracks. Then standardize the most critical fields: SKU identifiers, stock quantities, and channel names. Mismatched naming conventions are usually the first thing breaking data consistency, and fixing them costs nothing but time.
From there, even a manual process can work as a bridge. Set up scheduled exports from each system on a fixed cadence—daily or weekly—and consolidate them into a single spreadsheet or lightweight database. It’s not elegant, but a consistent manual process on clean data is more useful for forecasting than an automated sync built on inconsistent records. Use that period to identify which integrations are worth prioritizing once budget or dev time becomes available.

Which middleware or integration tools work well for syncing inventory and sales data across platforms?

The right answer depends on which platforms you’re connecting to. For teams running a mix of marketplaces, a dedicated commerce operations platform like Linnworks Advanced can handle multichannel inventory sync natively, eliminating the need for a middleware layer for many common integrations. For more custom or legacy stacks, tools like Celigo, Boomi, or MuleSoft provide flexible API-based integration, though they require more setup and maintenance.
For smaller operations with tighter budgets, Zapier and Make (formerly Integromat) can handle simpler point-to-point syncs between systems that have native app integrations. The practical limit is latency: most of these tools sync on a delay rather than in real time, which matters when you’re managing fast-moving inventory across four or five channels simultaneously. Whatever you use, confirm the sync frequency before committing. “Near real time” and “every four hours” are not the same thing when a velocity spike on Amazon can drain stock overnight.

How do you know if your data is clean enough to deploy an AI forecasting tool?

Run three checks before you switch anything on. First, deduplication: every SKU should have one authoritative record. If the same product appears under three different codes across your systems, the model will treat them as three separate products and forecast each one incorrectly. Second, field standardization: date formats, channel names, and units of measure must be consistent across all data sources the model will touch. Third, anomaly review: pull 12 months of sales history and look for sudden unexplained spikes, stretches of zero sales that don’t match known stockouts, and any period where data was clearly missing or imputed. Flag those periods before training.
A practical test: if you can’t explain an outlier in your historical data when you look at it, the model can’t either. It will just find a pattern in it that doesn’t exist. AI forecasting tools don’t fail because the algorithm is bad. They fail because the inputs are bad, and the model has no way to know.

What helps teams stick to regular review cadences when operational priorities keep shifting?

Attach the cadence to something that already has a recurring slot. Weekly inventory reviews are much easier to protect when they’re folded into an existing Monday morning standup or end-of-week ops check rather than scheduled as a standalone meeting. The cadence that gets its own calendar invite is the first one cut when things get busy.
For monthly and quarterly reviews, the most effective forcing function is a supplier deadline. When your team knows that Q4 purchase orders need to be placed by August and that August date is already on the calendar, the July demand review becomes a hard dependency, not a best practice. Build backward from your first required delivery dates, put those anchor dates in the calendar at the start of each quarter, and let the review cadence follow the timeline rather than the other way around.
The secondary issue is scope. Reviews fail when they’re too broad. Limit weekly reviews to fast-movers and high-risk SKUs, and set a hard time cap. Twenty minutes with a defined SKU list beats a 90-minute open-ended session that gets deprioritized.

Which benchmark metrics should teams track for inventory visibility and forecast accuracy?

On inventory visibility, the Linnworks 2026 State of Commerce Operations report found that only 33.2% of UK retailers and 37.2% of US retailers report excellent visibility across channels and warehouses. The majority describe their visibility as “good with minor issues.” If your team falls into that majority category, the practical target is to close the gap on whichever channel or warehouse location contributes the most to your stockout and oversell incidents, rather than pursuing broad visibility improvements across everything at once.
For forecast accuracy, a mean absolute percentage error (MAPE) below 20% is a reasonable operational target for mid-market retailers with moderate demand variability. Below 15% is strong. Anything above 30% means either your data quality needs attention or your model isn’t accounting for enough demand signal variability. Seasonal categories will always carry higher error rates than evergreen SKUs, so track MAPE separately by product category rather than as a single blended figure.
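MAPE itself is straightforward to compute per category. A minimal sketch; note that periods with zero actual sales are skipped, since the percentage error is undefined there:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, as a percentage. Periods with
    zero actual sales are excluded (MAPE is undefined there); track
    those periods separately rather than letting them distort the score."""
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return 100 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)
```

Computing this per product category, as suggested above, keeps volatile seasonal SKUs from masking strong accuracy on evergreen lines.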
On inventory health, days of inventory on hand (DOH) should be benchmarked against your actual lead times, not an industry average. If your supplier lead time is 30 days and your DOH drops below 45, you’re inside your safety buffer, and a single demand spike could put you at risk. Track DOH by SKU and flag anything where the cover falls below twice your standard lead time.
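The DOH check reduces to two small functions, using the twice-lead-time threshold suggested above:

```python
def doh(stock_on_hand, avg_daily_sales):
    """Days of inventory on hand at current sales velocity."""
    return stock_on_hand / avg_daily_sales

def at_risk(stock_on_hand, avg_daily_sales, lead_time_days):
    """True when cover falls below twice the standard lead time,
    the flagging threshold suggested in this section."""
    return doh(stock_on_hand, avg_daily_sales) < 2 * lead_time_days
```

With a 30-day lead time and 20 units per day, 900 units on hand is 45 days of cover and already flagged, which is exactly the "45 days sounds comfortable, but isn't" scenario from earlier in the guide.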