Approaches used

The Steering Group may design an interim priority setting approach that they feel is appropriate for the communities they are targeting.

It is important to consider the respondents (and their possible health condition) in choosing the method, and to think about what it is reasonable to ask them to do.

Given the varied needs of participants, the JLA does not impose a strict method for this stage. It does, however, ask PSPs to provide a detailed, transparent explanation of how they conducted the interim prioritisation and how the rankings were agreed. It may be necessary to offer an alternative to returns by email, such as phoning in ranked uncertainties or postal returns.

Approaches previously used by JLA PSPs include:

  • Asking people to choose the 10 questions that are most important in their experience.
  • Asking people to choose the 10 most important questions and rank them 1-10.

Examples of these are below:

Choose and rank 10
  • Participants are asked to consider the long list of questions, and then to choose and rank 10 of them.
  • This can be done via email and post, using a pro forma produced in Word, or online. 
  • Each ranked question is given a score (rank 1 = 10 points, rank 10 = 1 point) and totals are tallied for each question, keeping patient, carer and clinician responses separate.
  • A rank order for each respondent group is then calculated, and each question is re-scored according to its position in that group's list (the top-ranked question gains the maximum points).  The re-scored totals for each respondent group are added together to generate a combined ranking of all the questions.
  • Participants have to make choices about the questions and enter into a process of priority setting, producing a genuine set of priorities. 
  • The ranking materials can be produced easily and cost-effectively.
  • Asking respondents to rank their 10 questions produces a slightly clearer and more nuanced result, with a lower risk of questions being ranked in joint place, particularly if the interim survey includes a smaller number of questions.
  • It also gets respondents into the frame of mind of ranking and choosing a top 10 in the final workshop.
  • When carried out via email/post, this approach can generate a lot of data that needs to be manually entered into a spreadsheet. 
  • Not all survey software allows for questions to be chosen and then ranked.  Alternative or upgraded software may be needed to run the exercise online, taking care not to create a page of questions that is overly long or difficult to navigate. 
See an example of the interim survey ranking form in the Key Documents section of the Childhood Disability PSP and the Type 2 Diabetes PSP on the JLA website.
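The two-stage scoring described above can be sketched in code. This is a minimal illustration, not JLA-supplied software: the question IDs and function names are invented, and it assumes each respondent returns a list of their chosen questions ordered from rank 1 down to rank 10.

```python
from collections import defaultdict

def tally_group(rankings):
    """Sum points within one respondent group (e.g. patients).
    rankings: a list of per-respondent lists of question IDs,
    ordered from rank 1 (most important) downwards.
    Rank 1 earns 10 points, rank 10 earns 1 point."""
    totals = defaultdict(int)
    for ranked in rankings:
        for position, question in enumerate(ranked):
            totals[question] += 10 - position  # rank 1 -> 10 pts, rank 10 -> 1 pt
    return totals

def rescore(group_totals, n_questions):
    """Convert one group's point totals into a rank order, then
    re-score each question by its position in that order, so every
    group carries equal weight: the group's top question gets
    n_questions points, the next gets n_questions - 1, and so on."""
    ordered = sorted(group_totals, key=group_totals.get, reverse=True)
    return {q: n_questions - i for i, q in enumerate(ordered)}

def combine(groups, n_questions):
    """Add the re-scored totals from each respondent group to give
    a single combined ranking of all the questions."""
    combined = defaultdict(int)
    for group_totals in groups:
        for q, pts in rescore(group_totals, n_questions).items():
            combined[q] += pts
    return sorted(combined, key=combined.get, reverse=True)
```

Keeping the raw tallies separate per group and only combining after re-scoring is what stops a large clinician response from outweighing a smaller patient one.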


Choose 10
  • Participants are given the long list of questions.  They are then asked to choose 10 of them, but not to rank them. 
  • This can be done using email/post, or online. 
  • Each time a question is chosen, it is given one point.  Separate tallies should be maintained for the different stakeholder groups, so that each group's totals can be equally weighted when added together.
  • Participants have an opportunity to consider the whole list, but must still make choices that involve them in genuine shortlisting.
  • May be suited to groups that find it hard to rank topics individually, for whom simply choosing 10 would be sufficiently challenging. 
  • May also be useful for those PSPs where the number of questions sent for interim prioritisation is relatively high.
  • When carried out via email/post, this approach can generate a lot of data that needs to be manually entered into a spreadsheet. 
  • When done online, can result in a very long list that may be hard to digest. 
  • Not asking participants to rank their choices may result in more questions being in joint place, particularly when an interim survey includes a lower number of questions.
See an example of the interim survey in the Key Documents section of the Anaesthesia and Perioperative Care PSP and the Adult Social Work PSP on the JLA website.
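The unranked tally above can be sketched in the same way. Again this is an illustration rather than JLA-supplied code: the guidance asks only that groups be kept separate and equally weighted, so the specific weighting shown here (dividing each group's counts by that group's total votes before adding across groups) is one plausible scheme, not a prescribed formula.

```python
from collections import Counter

def count_choices(choice_lists):
    """Tally one respondent group's choices: one point each time a
    question is picked (the choices are unranked)."""
    counts = Counter()
    for chosen in choice_lists:
        counts.update(chosen)
    return counts

def combined_ranking(groups):
    """Weight each group equally by converting its raw counts into a
    share of that group's total votes before summing across groups,
    then return the questions ordered from highest combined share."""
    combined = Counter()
    for counts in groups:
        total = sum(counts.values())
        for question, c in counts.items():
            combined[question] += c / total
    return [q for q, _ in combined.most_common()]
```

Without the per-group normalisation (or a similar step), simply adding raw counts would let the group with the most respondents dominate the combined list.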

In the past, a small number of PSPs have used a Likert scale for respondents to rate the importance of each question. The JLA does not recommend this approach for interim prioritisation, as the results tend to produce very small differences in scores between the questions. This does not give a meaningful indication of the relative importance of the different questions for different groups, making prioritisation very challenging.