April 17, 2026
Key points
  • Resource indicators (kilometers created) and performance indicators (actual attendance): both are necessary
  • Five families of indicators: volume, modal distribution, evolution, seasonality, impact
  • An effective dashboard prioritizes (3 to 5 strategic indicators), visualizes, and contextualizes
  • Common mistakes: measuring only the easy, multiplying without prioritizing, comparing the incomparable, ignoring the qualitative
  • Robust management combines objective quantitative data + user feedback + contextual analysis
  • Indicators are only valid if they allow you to make better decisions

The trap: resource indicators vs performance indicators

The first mistake in building an indicator system is to confuse the resources deployed with the results obtained.

Resource indicators: what we did

Resource indicators describe public action: kilometers of bike paths created, number of bike racks installed, budget devoted to active mobility, number of cycling training sessions organized.

These indicators are necessary to report on service activity, justify the use of budgets and communicate on achievements. They help answer the question: “What did we do?”

But they don't say anything about the impact. Creating 10 kilometers of bike paths does not guarantee that they will be used. Installing 200 bike racks does not mean they will fill up. Training 500 people to bike does not prove that they will actually start pedaling for their daily trips.

Performance indicators: what has changed

Performance indicators measure the effect produced by public action: number of cyclists on new facilities, evolution of the modal share of cycling in daily travel, reduction of transport-related CO₂ emissions, improvement of air quality.

These indicators are more difficult to produce, because they require measurement devices (sensors, surveys, mobility data) that are not always in place. They also involve distinguishing what results from public action from what results from other factors (weather, economic context, societal trends).

But they are the ones that make genuine steering possible. Knowing that a newly created bike path is used by 300 cyclists per day (result), rather than simply that it measures 2 kilometers (resource), makes it possible to assess the relevance of the investment and to adjust future choices.

The necessary balance

A robust indicator system combines the two approaches:

  • Resource indicators make it possible to monitor the execution of the action programme
  • Performance indicators make it possible to assess impact and to guide future decisions

The trap is to stop at resource indicators, which are reassuring (“we acted”) but which prove nothing about the effectiveness of the action.

The five families of mobility indicators

To effectively manage a mobility policy, it is useful to structure the indicators into five complementary families. Each one answers a different strategic question.

Family 1: Volume — How many people are using the infrastructure?

Volume indicators measure the actual use of active mobility infrastructures: number of cyclists on a cycle path, number of pedestrians on a path, number of users on a greenway.

Why it's important: These figures make it possible to verify that the infrastructures created meet a real need, to size future developments and to produce objective data for financing applications.

Examples of indicators:

  • Average number of visitors per day on the main cycle routes
  • Total number of annual greenway crossings
  • Monthly evolution of pedestrian traffic in traffic-calmed areas

How to produce them: Automatic sensors installed on strategic axes, occasional manual counts for validation, mobility surveys for home-work trips.

Limitation to keep in mind: Volume alone says nothing about the quality of the experience, user satisfaction or environmental impact. It is a necessary but insufficient basis.
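As an illustration of how such a volume indicator might be computed from raw counter data, here is a minimal sketch; the dates and counts below are invented for the example:

```python
from statistics import mean

# Hypothetical daily counts from one automatic sensor on a cycle route
daily_counts = {
    "2025-06-01": 412, "2025-06-02": 538, "2025-06-03": 497,
    "2025-06-04": 565, "2025-06-05": 521, "2025-06-06": 389,
    "2025-06-07": 644,
}

# Average number of visitors per day (the first example indicator above)
average_per_day = mean(daily_counts.values())

# Total crossings over the period (the second example indicator)
total_crossings = sum(daily_counts.values())

print(f"Average: {round(average_per_day)} cyclists/day, total: {total_crossings}")
```

The same two aggregations, applied month by month, give the monthly attendance curve used later in the dashboard section.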

Family 2: Modal distribution — What balance between modes of transport?

Modal distribution (or “modal share”) measures the proportion of trips made with each mode of transport: walking, cycling, public transport, car.

Why it's important: The objective of active mobility policies is not only to increase the number of cyclists in absolute terms, but to change the balance between modes in favor of soft mobility. A 10% increase in the number of cyclists accompanied by a 20% increase in car traffic is not a success.

Examples of indicators:

  • Modal share of cycling in commuting (objective: to go from 5% to 12% in 5 years)
  • Share of trips of less than 3 km made on foot or by bike
  • Evolution of the modal share of the private car

How to produce them: Mobility surveys (EMD, EMC²), counting data crossed with car traffic data, regular surveys with representative samples.

Limitation to keep in mind: Mobility surveys are cumbersome and expensive. They are generally carried out every 5-10 years, which does not allow for fine-grained monitoring. They must be supplemented with proxies (bicycle use measured continuously).
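The arithmetic behind modal share is simply each mode's proportion of all surveyed trips. A minimal sketch, with trip counts invented for illustration:

```python
# Hypothetical trip counts from a mobility survey (values are illustrative)
trips_by_mode = {"walking": 2400, "cycling": 600, "public_transport": 1800, "car": 7200}

def modal_share(trips: dict, mode: str) -> float:
    """Share of trips made with one mode, as a percentage of all trips."""
    return 100 * trips[mode] / sum(trips.values())

cycling_share = modal_share(trips_by_mode, "cycling")
print(f"Cycling modal share: {cycling_share:.1f}%")
```

Tracking this figure against a target trajectory (for instance from 5% toward 12%) is what turns a survey result into a strategic indicator.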

Family 3: Evolution — What is the dynamic over time?

Evolution indicators measure trends: progression or regression in attendance, acceleration or slowdown in uses, seasonality.

Why it's important: An active mobility policy is judged by its capacity to sustainably transform practices. A one-off increase in attendance (event, nice weather) means nothing. What matters is the underlying trend.

Examples of indicators:

  • Annual growth rate in cycling (+12% per year on average over 3 years)
  • Comparison of year N vs year N-1 over the same periods (neutralization of weather and seasonal effects)
  • Evolution of traffic on old routes vs recent routes (to measure the network effect)

How to produce them: Continuous counting data over several years, with particular attention to the comparability of periods (compare July N with July N-1, not July N with January N).

Limitation to keep in mind: A drop in attendance is not always a failure. It can reflect a shift of flows to new axes (network effect). Interpretation must be contextual.
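The like-for-like comparison described above (July N vs July N-1) and an average annual growth rate can be sketched as follows; the monthly counts are hypothetical:

```python
# Hypothetical counts for the same route, same month, three successive years
july_counts = {2023: 9800, 2024: 11100, 2025: 12300}

def yoy_change(current: float, previous: float) -> float:
    """Year-over-year change in percent, comparing like periods only."""
    return 100 * (current - previous) / previous

change_2025 = yoy_change(july_counts[2025], july_counts[2024])

# Compound average annual growth over the full span (here, two years)
years = 2025 - 2023
cagr = 100 * ((july_counts[2025] / july_counts[2023]) ** (1 / years) - 1)

print(f"July 2025 vs July 2024: {change_2025:+.1f}%")
print(f"Average annual growth: {cagr:+.1f}%")
```

Comparing identical periods is what neutralizes the weather and seasonal effects; comparing July with January would measure seasonality, not trend.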

Family 4: Seasonality — What peaks and what troughs?

Seasonality measures the variations in attendance at different times of the year, days of the week and time slots.

Why it's important: Understanding seasonality makes it possible to adapt services (reinforced maintenance in high season, targeted communication during off-peak periods), to distinguish utility uses (peaks at commuting hours) from recreational uses (peaks at weekends), and to anticipate needs.

Examples of indicators:

  • Ratio of summer to winter attendance (indicates whether use is touristic or structural)
  • Week/weekend distribution (utility use if 60-70% during the week, recreational if 60-70% at the weekend)
  • Morning and evening hourly peaks (an indicator of commuting trips)

How to produce them: Analysis of automatic counting data with hourly granularity, crossing with weather variables and school calendar.

Limitation to keep in mind: Seasonality is not a problem in itself. It is a characteristic that must be understood in order to adapt management. Highly seasonal use (a tourist greenway) requires a different strategy than stable year-round use (urban commuting).
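The week/weekend rule of thumb above lends itself to a simple classification. A sketch with invented weekly totals; the 60% threshold mirrors the 60-70% guideline stated earlier:

```python
# Hypothetical weekly distribution of crossings on one route
weekday_count = 6500   # Monday-Friday total
weekend_count = 2100   # Saturday-Sunday total

week_share = 100 * weekday_count / (weekday_count + weekend_count)

# 60-70% during the week suggests utility (commuting) use;
# 60-70% at the weekend suggests recreational use
if week_share >= 60:
    profile = "utility"
elif week_share <= 40:
    profile = "recreational"
else:
    profile = "mixed"

print(f"{week_share:.0f}% of crossings during the week -> {profile} use")
```

The same logic applied to a summer/winter ratio distinguishes touristic from structural use.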

Family 5: Impact — What is the effect of the arrangements?

Impact indicators measure the changes produced by mobility policies: modal shift, reduction of emissions, improvement of road safety, improvement of public health.

Why it's important: That is the aim of public action. Creating bike paths is not an objective in itself; it is a way to reduce pollution, improve health and decarbonize transport.

Examples of indicators:

  • Number of car trips avoided thanks to new cycling facilities (estimated via “How did you travel before?” surveys)
  • Estimated reduction in CO₂ emissions linked to modal shift
  • Reduction in cycling accidents after the creation of dedicated infrastructures
  • Increase in weekly sports practice (health surveys)

How to produce them: Crossing several data sources (counts, surveys, traffic data, health data), modeling, before-and-after studies.

Limitation to keep in mind: Causal attribution is always difficult. A reduction in accidents can result from cycling improvements, but also from better general road-safety prevention. Conclusions must be drawn carefully, and the underlying assumptions made explicit.
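As an example of making assumptions explicit, here is a sketch of a CO₂-avoided estimate. Every input is an assumption: the trips avoided come from surveys, the trip length is an average, and the emission factor is an assumed value, not a measured one:

```python
# All inputs are assumptions and should be documented alongside the result
car_trips_avoided_per_day = 150     # from "How did you travel before?" surveys
average_trip_length_km = 4.0        # assumed average length of avoided trips
grams_co2_per_car_km = 150          # assumed average car emission factor

annual_kg_co2_avoided = (
    car_trips_avoided_per_day * average_trip_length_km
    * grams_co2_per_car_km * 365 / 1000
)
print(f"Estimated CO2 avoided: {annual_kg_co2_avoided / 1000:.1f} tonnes/year")
```

Publishing the three input values with the result lets readers challenge the hypotheses rather than the headline figure.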

How to build a readable mobility dashboard

Having lots of indicators is one thing. Organizing them in a way that is legible for decision-makers is another. An effective dashboard respects several principles.

Principle 1: Prioritize indicators

Not all indicators have the same strategic importance. A distinction must be made between:

Strategic indicators (3 to 5 maximum): These are the key figures that elected officials and directorates-general follow. Examples: modal share of cycling, annual evolution of cycling use, number of km of secure cycling facilities.

Operational management indicators (10 to 15): These are the metrics that technical services use to adjust their actions on a daily basis. Examples: attendance by axis, hourly distribution, occupancy rate of bicycle parking.

Context indicators (unlimited): Background data that helps interpret the strategic indicators. Examples: weather, local events, work on the road network.

A readable dashboard highlights the strategic indicators (page 1), details the management indicators (following pages) and leaves the context indicators in the annex.

Principle 2: Visualize rather than list raw numbers

Raw numbers are hard to interpret. Visualizations (graphs, curves, maps) make information immediately intelligible.

Examples of effective visualizations:

  • Monthly attendance curve over 3 years (detects trends)
  • Traffic heat map by axis (identifies hot spots and underused areas)
  • Bar chart of the hourly distribution (distinguishes between utility and recreational use)
  • Modal distribution pie chart (shows the balance between modes)

Golden rule: A decision maker should be able to understand the essentials in 30 seconds of reading the graph, without having to read a textual explanation.

Principle 3: Contextualize the numbers

An isolated number means nothing. Is “500 cyclists per day” a lot or a little? The answer depends on the context.

Three ways to contextualize:

  • Time comparison: 500 cyclists/day in 2025 vs 350 in 2023 (+43%)
  • Spatial comparison: 500 cyclists/day on this axis vs 800 on the neighboring comparable axis
  • Comparison with a lens: 500 cyclists/day vs the goal of 600 set in the bike plan (-17% compared to the target)

Each indicator should be accompanied by at least one of these three context elements.
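The three context elements can be produced mechanically once the reference values are known. A minimal sketch using the figures from the example above:

```python
def contextualize(value: float, reference: float) -> str:
    """Percentage gap between a measured value and a reference value."""
    return f"{100 * (value - reference) / reference:+.0f}%"

cyclists_per_day = 500

# The three context elements for the same raw figure
temporal = contextualize(cyclists_per_day, 350)   # vs 2023 on the same axis
spatial = contextualize(cyclists_per_day, 800)    # vs a comparable neighboring axis
target = contextualize(cyclists_per_day, 600)     # vs the bike-plan objective

print(f"Temporal: {temporal}, spatial: {spatial}, vs target: {target}")
```

The same raw figure of 500 cyclists/day reads very differently depending on which reference sits next to it, which is the whole point of contextualization.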

Principle 4: Update regularly but not excessively

A dashboard that is updated every week creates noise more than information. A dashboard that is updated once a year is too late to allow for adjustments.

Recommended pace according to the type of indicator:

  • Strategic indicators: quarterly or semi-annual update
  • Operational management indicators: monthly update
  • Context indicators: consultation on demand

This pace makes it possible to detect trends without drowning in short-term variations.

Common mistakes in choosing indicators

Even with the best intentions, local authorities often make the same mistakes when building their mobility indicator systems.

Mistake 1: Measuring only what is easy to measure

The kilometers of bike paths created are easy to measure (maps, GIS). The effective use of these paths is harder (it requires sensors). As a result, many communities stop at the kilometer count and never measure usage.

Consequence: We pilot on the means (“we created X km”) without knowing if these means produce the expected results (“Y cyclists use them”).

Good practice: Investing in attendance measurement devices, even modest ones (a few sensors on strategic axes), to complement resource indicators with result indicators.

Mistake 2: Multiplying indicators without prioritizing them

Some mobility dashboards line up 50 indicators without any hierarchy. The result: decision-makers are drowned in information and no longer know what to look at.

Consequence: The dashboard becomes a formal exercise (“we produce numbers”) with no effect on decision-making.

Good practice: Limit strategic indicators to 3-5, organize the others by level of detail, and build a summary page that fits on one screen.

Mistake 3: Comparing non-comparable data

Comparing the use of an urban bike path with that of a rural greenway makes no sense. The contexts, the user profiles, the functions are incomparable.

Consequence: Erroneous conclusions (“our greenway is underused compared to the urban track”) when the two infrastructures perform different functions.

Good practice: Compare only infrastructures of the same nature, in similar contexts. Or clearly explain differences in context to avoid simplistic interpretations.

Mistake 4: Not combining quantitative and qualitative

A purely quantitative system of indicators (number of cyclists, kilometers travelled, CO₂ reduction) misses an essential part of reality: user satisfaction, barriers to use, conflicts of use.

Consequence: A policy can show good numbers while generating dissatisfaction (busy bike paths but perceived as dangerous, saturated greenways at peak times).

Good practice: Complete quantitative indicators with regular satisfaction surveys (every 2 years), perception barometers, and qualitative interviews with typical users.

Quantitative indicators + user feedback = robust management

An effective mobility indicator system is never purely numerical. It combines:

Objective quantitative data (measured attendance, distances travelled, changes over time) that make it possible to monitor trends, compare situations and produce factual reports.

Qualitative feedback from users (satisfaction, obstacles, suggestions, points of tension) that make it possible to understand behaviors, identify problems that are not visible in the figures and to anticipate changes.

A contextual analysis that crosses the two sources and avoids mechanical interpretations. A drop in attendance can be a problem (infrastructure that no longer meets needs) or a normal evolution (transfer to a new, more efficient route). Only contextual analysis can decide between the two.

Example of integrated steering:

A community is observing a stagnation in cycling traffic despite significant investments.

  • Quantitative reading only: “The facilities are not working.”
  • Cross-reading with user surveys: Cyclists say that the facilities are good but that breaks in the routes (unsecured junctions, discontinuities) discourage them. The problem is not the quality of the sections; it is the continuity of the network.

The action to be taken changes radically: rather than creating new sections, the priority is to secure the existing discontinuities.

Conclusion: steer, don't just measure

Indicators are not an end in themselves. They are only valid if they allow better decisions to be made: where to invest first, which arrangements produce the best results, what uses are emerging and must be supported, what points of friction must be corrected.

A good mobility indicator system meets three requirements:

  1. Balance between resources and results: measure not only the action, but also its impact
  2. Clear prioritization: distinguish strategic indicators (for decision-makers) from operational indicators (for services)
  3. Quantitative/qualitative crossing: complete the figures with real user feedback

Communities that build such robust indicator systems find that they transform the way they manage their mobility policies. They move from steering by sight to steering by data, which does not guarantee error-free decisions but significantly increases the probability of making good ones.

Practical guides