Sponsors and CROs in a Functional Service Provider (FSP) model share the same objective: to deliver the clinical study safely, effectively and efficiently. Whilst the ways in which partnerships track performance and achievement of this goal can vary across organisations, a constant fundamental assumption is that key performance indicators (KPIs) should be selected, designed and articulated so that the resulting outcomes support the achievement of the sponsor’s overarching business strategy. Here, we share some best practices for KPI creation and management to ensure success in your program.

Selecting the right KPIs

In our experience, the KPIs that are most successful in driving the “right performance” are those that align with both the KPIs measured internally by the sponsor and the partnership’s objectives. This is especially true at the beginning of an FSP partnership. Orienting the teams around meaningful, shared KPIs facilitates ease of understanding, ensures buy-in and fosters deeper collaboration across the organisations, while allowing partnership insights to be discovered.

In terms of the number of KPIs, we have found that articulating and overseeing a large set of performance measures does not aid transparency and can be a drain on much-needed resources. We suggest sticking to the measures that directly contribute to the desired objectives; for most partnerships, between six and ten measures are likely to be key.

In the early stages of the model, KPIs may be more tactical and focused on measures designed to understand certain logistical aspects, for example Assurance of Supply (recruitment), Operational Excellence and Delivery. Whilst we suggest committing to tracking and managing the same KPIs for about a year to create consistency in data and reporting, it is important to continuously reflect on whether those KPIs remain relevant over time. Strategies and objectives are dynamic and will change, making it inappropriate to continue reporting on the same KPIs indefinitely. Equally, more data may become available that provides a deeper understanding of how the program works, and this should result in different or additional KPIs becoming applicable.


Successfully measuring KPIs

Once the number and type of indicators are agreed, it is important to ensure they can be measured, so that successful performance and progress (or regression) toward the overall program goals can be determined. To ensure your indicators will provide robust data for performance analysis, targets should follow the SMART formula (a brief worked example follows the list):

  • Specific: There should be one accepted definition of the KPI, and it should be highly visible to the relevant stakeholders.
  • Measurable: The KPI should be easily quantifiable so that actual performance can be stated against target performance, e.g., voluntary turnover.
  • Achievable: The target should be challenging yet attainable; sought-after performance standards should be realistic.
  • Relevant: Is the measure practical and pragmatic? A KPI provides more insight into performance if it is consistent with the current program context, and it can only be relevant if our processes and people can influence it.
  • Time framed: Values of KPIs should be expressed within a specified time frame. We suggest targets be relatively short-term measurements so that program performance can be tracked appropriately over the longer term.
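
To make the “Measurable” and “Time framed” elements concrete, the short sketch below calculates a hypothetical voluntary turnover KPI for a single quarter and compares it with a target. It is a minimal illustration only; the figures, field names and threshold are assumptions, not prescribed values.

    # Illustrative sketch only: a quarterly voluntary turnover KPI compared with a target.
    # All figures and thresholds are hypothetical assumptions for demonstration.

    def voluntary_turnover(voluntary_leavers: int, average_headcount: float) -> float:
        """Voluntary turnover for the period, expressed as a percentage."""
        return 100.0 * voluntary_leavers / average_headcount

    # Example period: Q1, with assumed figures
    actual = voluntary_turnover(voluntary_leavers=4, average_headcount=120)  # ~3.3%
    target = 5.0  # assumed quarterly target (%)

    status = "on target" if actual <= target else "off target"
    print(f"Q1 voluntary turnover: {actual:.1f}% vs target {target:.1f}% -> {status}")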


Ensuring data quality for KPIs

Data quality is inherently tied to performance measurement. KPIs are only as good as the reliability of the data associated with them, and so a data strategy covering the following elements should be well defined from the outset:

Data source: Where does the data originate? Be as specific as possible in identifying sources, down to specific directories or file names if necessary.

Data extraction moment: At which point in time is the data fed into the KPI? Note the exact extraction moment as precisely as possible, for example at noon CET, or on the first day of every week.

Data update frequency: How often is the data refreshed? Determine whether the data is real-time, weekly, bi-weekly, etc.

Data supplier: Which individual or department is responsible for delivering the source data?

KPI owner: Which individual is ultimately responsible for KPI availability and accuracy? A named owner encourages accountability and responsibility.
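
One lightweight way to make these elements explicit is to capture them in a single structured definition for each KPI. The sketch below is only an illustration of that idea, assuming a simple Python record; the field values, the “Voluntary turnover” example and the named roles are hypothetical.

    # Illustrative sketch: capturing the data strategy elements for each KPI in one record.
    # Field values below are hypothetical examples, not a prescribed standard.
    from dataclasses import dataclass

    @dataclass
    class KPIDefinition:
        name: str                # agreed, single definition of the KPI
        data_source: str         # where the data originates (system, directory or file)
        extraction_moment: str   # exact point in time the data is fed into the KPI
        update_frequency: str    # how often the data is refreshed (real-time, weekly, ...)
        data_supplier: str       # individual or department delivering the source data
        kpi_owner: str           # named individual accountable for availability and accuracy

    turnover_kpi = KPIDefinition(
        name="Voluntary turnover",
        data_source="HR reporting system, monthly headcount extract",
        extraction_moment="Noon CET on the first working day of each month",
        update_frequency="Monthly",
        data_supplier="HR operations",
        kpi_owner="Program lead (named individual)",
    )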


Data reporting and visualisations

Data reporting is critical: it is the first step toward making decisions and acting on the information in a way that improves performance and drives better outcomes. A clear strategy should set out how often and to whom data will be reported, and should account for the different reporting needs of each measure and responsible team.

Below is a graphical representation of how data can flow within a program governance structure: 

Figure 1: KPI data flow within program governance structure
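
A reporting strategy of this kind can be made concrete by recording, for each measure, how often it is reported and to which governance forum. The sketch below illustrates one possible way to do this; the measures, cadences and forum names are assumptions for illustration only.

    # Illustrative sketch: a simple reporting schedule mapping each measure to a cadence and audience.
    # Measures, cadences and forum names are hypothetical examples.
    reporting_schedule = {
        "Recruitment rate":      {"frequency": "weekly",    "audience": "Operational team meeting"},
        "Data entry timeliness": {"frequency": "monthly",   "audience": "Joint operations committee"},
        "Voluntary turnover":    {"frequency": "quarterly", "audience": "Executive steering committee"},
    }

    def measures_for(forum):
        """Return the measures reported to a given governance forum."""
        return [m for m, rule in reporting_schedule.items() if rule["audience"] == forum]

    print(measures_for("Executive steering committee"))  # ['Voluntary turnover']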

Data visualisations can also give life to goals and make progress tangible for teams. Some of our partners have availed of the services of our ICON Data Analytics team to create a dynamic, balanced scorecard. While no measurement is perfect or complete, creating meaningful visual data comparisons enables deeper interpretation and better decisions by teams. In addition, visualising performance over time enables us to identify trends, which provides useful context when assessing underlying progress and successes.

Figure 2: Balanced scorecard and examples of KPIs
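
As a minimal sketch of how performance over time can be brought together for interpretation, the example below builds a small KPI-by-period table with a simple on/off-target flag. The data, period labels and thresholds are invented for illustration; a real scorecard, such as one built by a data analytics team, would be considerably richer.

    # Minimal sketch: a KPI-by-period view with a simple on/off-target flag.
    # All values are invented for illustration.
    import pandas as pd

    data = pd.DataFrame({
        "kpi":    ["Recruitment rate"] * 3 + ["Voluntary turnover"] * 3,
        "period": ["Q1", "Q2", "Q3"] * 2,
        "actual": [82, 88, 95, 6.1, 5.2, 4.4],
        "target": [90, 90, 90, 5.0, 5.0, 5.0],
    })

    # Higher is better for recruitment; lower is better for turnover.
    # A real scorecard would encode the direction of each KPI explicitly.
    data["on_target"] = [
        actual >= tgt if kpi == "Recruitment rate" else actual <= tgt
        for kpi, actual, tgt in zip(data["kpi"], data["actual"], data["target"])
    ]

    scorecard = data.pivot(index="kpi", columns="period", values="actual")
    print(scorecard)                             # actuals by period, making trends visible
    print(data[["kpi", "period", "on_target"]])  # simple on/off-target view per period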


Responsive KPIs for dynamic partnerships

These best practices for KPI management will help guide robust partnership controls that improve efficiency. If planned, managed and expressed well, KPIs tell us how well the program is operating across different areas and can inform timely decision-making to direct appropriate actions. In so doing, we drive partnership growth and ensure performance remains aligned with our sponsor’s overarching objectives.

Authored by: Triona Price Smith

Vice President, Program Management