What is Benchmarking?

Bonfire Benchmarking is a place for Procurement Professionals to leverage the vast quantity of public procurement data to understand typical project metrics for various project categories. Procurement Professionals can quickly access relevant metrics calculated for a specific category and, in turn, provide valuable insight to their budget owners and plan, create, and manage projects effectively, drawing on the collective knowledge of Bonfire’s large customer base.

The Value of Benchmarking in Bonfire

  • Benchmarking adds confidence and helps you get on the right track and moving faster by leveraging templates and data from peer organizations.
  • You don’t need to run scenario analysis AFTER you’ve already posted the criteria; that’s too late! Bonfire Benchmarking allows you to reference common project setups BEFORE you draft, while you can still make key changes that drive better decisions.
  • Are you a junior buyer who’s never touched a specific category before? Don’t worry, our metrics and templates will provide you with insight on what to expect when building and running your projects in any category.

Please note: Benchmarks utilize public sector purchasing data across sectors, and are intended to showcase general patterns used in purchasing categories. Your organization may have other standards of practice that take precedence over the aggregated metrics shown.

Navigating to Benchmarks  

Benchmarking guides are triggered by the Benchmarking button on the bottom right corner of your screen on the following 3 pages:


  • Project Details page (Projects Module)
  • Project Draft page (Projects Module)
  • Request Details page (Intake Module)

Once you've selected a project and loaded its Project Details or Project Draft page (in the Projects Module), you will see the Benchmark button appear in the bottom right-hand corner. If you click on the button you will be taken to the Category Selection view, where you can navigate a set of intuitive categories to find benchmarks for a specific category.



Similarly, if you are viewing the Request Details page (in the Intake Module), the Benchmark button will appear. Clicking on the button will take you directly to the category that matches your intake request. This matching is based on how your requester has self-identified their project. You can always override this category selection by editing the request. Once at the corresponding benchmark view, you can always navigate to benchmarks for other categories.



Within the Benchmark view, you can further choose to view benchmarks for RFP projects only, or for all other project types. In our experience, RFP projects go through a more thorough evaluation process compared to other project types such as price-only bids, hence they are singled out from other project types to show their own benchmark assessments. As usage for other project types becomes more consistent across organizations, Bonfire will be able to apply other project attributes to further segment benchmarking data.


Interpreting the Benchmarks

There are 4 different benchmarks: Project Timespans, Evaluation Criteria, Vendor Submissions, and Requested Information.

Each title contains the key takeaway you can use to make decisions in your project, while each graph contains evidence to support that takeaway.

Project Timespans

In Bonfire, a typical project goes through 3 stages/statuses:

  • Open - when the project is publicly posted
  • Evaluation - when the project is closed for submissions and evaluators can review and score various vendor proposals
  • Completed - after the evaluation is completed, often a winner is chosen and an award is made.


This Benchmark represents the typical ranges of days for each of the above project stages. The combined range is also shown. 

  • The total range is calculated from whole project durations, measured from the date a project is publicly posted to the date it is awarded.
    • It is not the same as adding the lowest ranges and the highest ranges together. For example, a project that was open for 24 days, then spent 25 days in the Evaluation phase and 7 days in the Awarding phase, arrives at a total of 56 days, which falls within the Majority of Projects range.
  • The median number of days (♢ diamond symbol) is the middle value for that stage: half of all projects spend fewer days in it, and half spend more.
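As a rough illustration, the "Majority of Projects" range (25th to 75th percentile) and the median diamond could be derived from historical durations like this. The durations below are hypothetical, and this is only a sketch, not Bonfire's actual aggregation logic:

```python
import statistics

# Hypothetical total project durations (days) for one category,
# measured from public posting date to award date.
durations = sorted([31, 38, 42, 45, 49, 52, 56, 60, 63, 71, 88, 104])
n = len(durations)

# Simple index-based approximation of the 25th and 75th percentiles,
# which bound the "Majority of Projects" range.
p25 = durations[n // 4]
p75 = durations[(3 * n) // 4]

# The diamond marker: the median (middle value) of all durations.
median = statistics.median(durations)

print(f"Majority of projects finish in {p25}-{p75} days (median {median})")
```

Half of the sample projects finish faster than the median and half take longer, which is why the diamond is a useful single anchor for setting expectations.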

Use Cases

  • You can leverage this data to set expectations with your internal customers. 
    • Procurement Professionals often are questioned about why a project needs to take so long. 
    • Budget owners often are eager to make the purchase as soon as possible. 
    • By showcasing typical project durations at public agencies for a specific category of projects, you bring value to your stakeholders, using data and visualization to inform them how long a typical project would take.
    • Provides stakeholders with confidence in your project time management.
  • If your project falls within the total range, or the listed range for each phase, your project duration is between the 25th and 75th percentile of all projects: 25% of projects are shorter than yours, and 25% are longer.
  • There may be very valid reasons for your project to fall outside of the typical range:
    • If you receive insufficient submissions, or if your state/province requires a longer posting period, your Open phase can take longer.
    • If you have a large number of evaluators, or more evaluation groups in a project, the Evaluation phase may take longer.
    • If you encounter detailed negotiations or negotiations with multiple vendors, your Awarding stage could be longer. 


Evaluation Criteria

Projects in Bonfire can utilize numerical scoring to evaluate vendor proposals. Scores are applied either to Pricing-related criteria or to Qualitative criteria.


This Benchmark helps answer key questions when structuring your projects: How heavily should pricing be weighted? What qualitative criteria should I use?

In any given category, you will find some projects that do not use Pricing related criteria, while others do. In this example category, 76% of all projects use Pricing related criteria. For these projects, this Criteria Benchmark presents 3 key pieces of information:

  1. The most common number of points given to Pricing Criteria (for example, 35 out of 100 points)
  2. The distribution of Pricing Weight
    1. X-axis (bottom of the graph) represents pricing weight buckets, from 1-10/100 points to 91-100/100 points
    2. Bars represent the % of projects that used Pricing weights in that bucket. The higher the bar, the more frequently projects used Pricing weights in the associated bucket. For example, 34% of projects with pricing criteria weighted this criterion at 31-40 / 100 points.
    3. Lower peaks indicate that there are fewer data points (for example, the least common values for Pricing Criteria points are 70 and 90, each accounting for 0% of the data individually)
    4. This Benchmark calls out the most common pricing weight used (35 / 100 points), and correspondingly, the most common weight used for all of the qualitative criteria (65 / 100 points).
  3. Common keywords are presented in a word cloud. The bigger a keyword, the more frequently it appears in the titles of project criteria.
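The bucketing behind points 1 and 2 above can be sketched as a simple histogram over pricing weights. The weight data here is hypothetical (chosen so the mode matches the 35/100 example); this is an illustration, not Bonfire's implementation:

```python
from collections import Counter

# Hypothetical pricing weights (points out of 100) from projects
# in one category that use pricing criteria.
pricing_weights = [20, 25, 30, 30, 35, 35, 35, 35, 40, 40, 50, 100]

# Bucket each weight into 10-point bins (1-10, 11-20, ..., 91-100),
# mirroring the x-axis of the distribution chart.
buckets = Counter((w - 1) // 10 for w in pricing_weights)
total = len(pricing_weights)
for b in sorted(buckets):
    low, high = b * 10 + 1, (b + 1) * 10
    share = 100 * buckets[b] / total
    print(f"{low:>3}-{high:<3}: {share:.0f}% of projects")

# Most common single pricing weight, and the complementary weight
# left over for all qualitative criteria combined.
most_common_weight = Counter(pricing_weights).most_common(1)[0][0]
qualitative_weight = 100 - most_common_weight
print(f"Most common pricing weight: {most_common_weight}/100 "
      f"(qualitative criteria: {qualitative_weight}/100)")
```

Because pricing and qualitative points must sum to 100, the most common qualitative weight is always the complement of the most common pricing weight (here 65/100 against 35/100).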

Use Cases

  • We frequently hear from customers who wonder what the best pricing weight is to reduce cost on a project while still guaranteeing quality through non-pricing requirements. Please note that pricing can play a different role depending on the project category.
    • Example: Non-RFPs tend to place value solely on pricing (majority of projects weighed pricing at 100 / 100 points)
  • With this benchmark, you now have supporting data that tells you how other public organizations are evaluating similar projects, and what are some typical pricing weights to set.
  • If your proposed pricing weight appears to be more frequently used by others, you can proceed with your criteria setup with more confidence.
  • However, you may have very legitimate reasons for deviating from typical pricing weights. Understanding the unique needs of your project will help you create unique project structures to satisfy your stakeholders. 

Vendor Submissions

With Bonfire, your vendors can have many types of interactions before finally submitting a proposal for a project.

  • Document Takers: Interested vendors can start by downloading the associated documents for viewing. Buyers typically use this as an indication of vendor interest in a project. Note that not every vendor that downloads project documents will go on to submit a proposal.
  • Submissions Started: Vendors can start a submission and save unfinished work until they are ready to submit. Buyers typically reach out to vendors who started a submission but did not complete it, to understand why.
  • Submissions Completed: Vendors complete their submissions as the very last step. 


This Benchmark presents the Vendor Submission funnel, to help you better manage the vendor pipeline.

  • Each node in this benchmark is a stage in the vendor submission funnel (e.g., Vendors Invited, Document Takers)
  • Numbers in each stage of the pipeline represent the average for your chosen category.
  • The benchmark title interprets the ratio between Vendors Invited and Submissions Completed, to estimate how many vendors you should expect to invite to achieve 5 completed submissions.
    • We recommend 4-5 submissions to introduce healthy competition in any project.
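The invitation estimate in the benchmark title boils down to a conversion-rate calculation across the funnel. A minimal sketch with hypothetical category averages (not actual Bonfire figures):

```python
import math

# Hypothetical category averages for each stage of the vendor funnel.
funnel = {
    "Vendors Invited": 40,
    "Document Takers": 18,
    "Submissions Started": 9,
    "Submissions Completed": 6,
}

# Conversion rate from invitation to completed submission.
conversion = funnel["Submissions Completed"] / funnel["Vendors Invited"]

# Invitations needed to expect a target number of completed submissions,
# rounded up so the expectation is met.
target_submissions = 5
invites_needed = math.ceil(target_submissions / conversion)
print(f"Invite ~{invites_needed} vendors to expect {target_submissions} submissions")
```

With a 15% invitation-to-submission rate, you would plan on roughly 34 invitations to land the recommended 5 completed submissions; your own organization's historical conversion may differ.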

Use Cases

    • Different purchase categories will see different participation rates. With this benchmark, you are guided by the public sector average for any given purchase category. You can target a number of invitations (How to invite vendors to a project) that results in an ideal number of submissions for your project.
      • Note that Bonfire benchmarks are currently cross-organization. Your own organization’s historical trends may be different.
    • You should still monitor vendor engagement during an individual project’s Open phase to achieve competitive vendor participation.
    • You might also want to reach out to vendors who start preparing submissions but don’t complete them, to discover if there are any systematic blockers you can fix.
    • If you have too few vendors, your project isn’t as competitive. If you have too many, you overburden your evaluators.

Requested Information  

This Benchmark shows the common attributes used when formulating Requested Information.


In this example, there are 6 pieces of Requested Information.

The benchmark then breaks down each Requested Information type. 

    • For Documents and Data Fields, the most common number of slots is shown, as well as additional details like keywords and document length
    • For BidTables and Questionnaires, frequency of usage is shown across all projects in this category, as well as additional details such as the average number of items and questions, respectively   

Use Cases

  • As buyers design a project, it’s important to build the right structural complexity, so that vendors are not overburdened in providing the right information in their proposals.
  • Understanding how other public organizations structure similar projects will help you to right-size project requirements.
  • Seeing the frequency of use for tools such as Questionnaires and BidTables will guide you in how to optimize the structure of your project.

Similar Projects 

How often do you ask colleagues for an example of a specific type of project, or roll the dice with Google to find a similar one? Through Benchmarking, Bonfire has hand-curated successful projects run by your peers across North America. Similar Projects lets you review a similar opportunity from the category you select and get a better idea of how to outline your scope and determine additional areas of consideration for your RFx.


Similar Projects will be available at the bottom of each benchmark for the chosen category and project type. Buyers can click the project name and download the relevant bid documents.

Use Cases

  • The Similar Project samples can help shape the creation of your project scope.
  • They provide a sample of what other organizations are doing for similar projects.
  • They should not be relied on for legal language, as this varies from jurisdiction to jurisdiction.
  • They show the specific requirements and criteria utilized.


Keep in mind that Benchmarks utilize public sector purchasing data across sectors and are intended to showcase general patterns in purchasing categories. Your organization may have other standards of practice, which should take precedence over the aggregated metrics shown.


Have more questions?

Please reach out to your designated Client Success Manager or our Support team at


