Sourcing-driven evaluations

Sourcing conducts a variety of supplier evaluations. These can be broadly divided into:

  • Supplier intelligence and comparative analyses
    • Supplier capabilities RFIs
    • Comparative rating/ranking of capabilities RFIs
    • Segmentation of capabilities RFIs by specific skills, with comparative analysis in support of specific areas of work or individual potential engagements
    • Collection, normalization, alignment, and comparison of supplier rate cards (see the sketch following this list)
  • Proposal analyses
    • Scoring supplier RFP responses. Every RFP should include a component of the supplier response that can be scored directly, for example, the degree to which a package meets functional requirements (mandatory, nice to have, optional).
    • Comparative analyses across supplier responses, both functional/capabilities and financial
    • Stakeholder questionnaires grading RFP responses and stakeholder-supplier interviews, with analysis to guide supplier selection toward stakeholder consensus
    • Standardized scripts for supplier customer references, combining questions common to all engagements with questions specific to the type of engagement, plus scoring and comparative analysis
    • Executive summaries of stakeholder decisions, prepared in concert with the project manager
  • Supplier performance
    • Customer satisfaction surveys
    • Engagement health scorecards, combining supplier self-scores with client scores
    • Supplier satisfaction surveys
    • Engagement tracking, for example, projected financial benefits
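
As a minimal sketch of what the rate-card work can look like in practice (we use Python for illustration; all role titles, currency codes, and conversion rates here are hypothetical):

    # Sketch: normalize supplier rate cards to a common role taxonomy and
    # a common currency so hourly rates can be compared side by side.
    # All role titles, currencies, and conversion rates are hypothetical.

    ROLE_ALIGNMENT = {  # map supplier-specific titles onto common roles
        "Sr. Developer": "Senior Developer",
        "Senior Engineer": "Senior Developer",
        "PM": "Project Manager",
    }

    TO_USD = {"USD": 1.00, "EUR": 1.08}  # hypothetical conversion rates

    def normalize_rate_card(rate_card):
        """Return {common role: USD hourly rate} for one supplier."""
        normalized = {}
        for role, (rate, currency) in rate_card.items():
            common_role = ROLE_ALIGNMENT.get(role, role)
            normalized[common_role] = round(rate * TO_USD[currency], 2)
        return normalized

    supplier_a = {"Sr. Developer": (95, "USD"), "PM": (120, "USD")}
    supplier_b = {"Senior Engineer": (80, "EUR"), "PM": (105, "EUR")}

    for name, card in (("Supplier A", supplier_a), ("Supplier B", supplier_b)):
        print(name, normalize_rate_card(card))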

Scoring conventions

This is an area where simplicity is the best approach. Through experience, we have found that a simple Fails/Meets/Exceeds (F/M/E) scale, expressed as the six-point numeric scale below, works best across all the evaluation areas mentioned above:

  • 5 = Greatly exceeds requirements/expectations
  • 4 = Exceeds requirements/expectations
  • 3 = Meets requirements/expectations—we look for a minimum "3.5" overall for supplier performance
  • 2 = Marginally meets requirements/expectations
  • 1 = Minimally meets requirements/expectations
  • 0 = No demonstrated competency, or response not provided
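
As a minimal sketch of how these scores roll up (Python, with hypothetical item names and scores), the overall score is a simple unweighted average checked against the 3.5 floor:

    # Sketch: roll individual 0-5 item scores up into an overall score and
    # check it against the 3.5 minimum we look for in supplier performance.
    # The item names and scores below are hypothetical.

    PERFORMANCE_FLOOR = 3.5

    def overall_score(item_scores):
        """Unweighted mean of the individual 0-5 item scores."""
        return sum(item_scores.values()) / len(item_scores)

    scores = {
        "Delivery quality": 4,
        "Responsiveness": 3,
        "Staffing continuity": 4,
        "Financial management": 3,
    }

    avg = overall_score(scores)
    verdict = "meets" if avg >= PERFORMANCE_FLOOR else "falls below"
    print(f"Overall score {avg:.2f} {verdict} the {PERFORMANCE_FLOOR} floor")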

We strongly recommend against "weighting" either individual items or categories to affect scoring. The presumption behind weighting is that it can (rightly) shift supplier rankings toward what the client considers most important. In our experience, which includes running numerous analyses with and without weightings, that presumption does not hold:

  • Any weighting large enough to materially affect rankings effectively makes that item the sole determining factor. "Reasonable" weightings, such as 2x or 3x multipliers, have virtually no effect on final rankings because a superior response will always prove superior (see the sketch following this list).
  • Weighting "takes the eye off the prize." There is no better tool to sow management discord than to collect a group of stakeholders in a room to then argue over what is more important.
  • Moreover, when stakeholders provide their evaluative responses at the culmination of the selection process, the role of sourcing is to guide them not only to consensus but to unanimity. There is no more powerful accelerator of engagement failure than lingering post-decision stakeholder dissent. Weighting only clouds the discussion.
  • Lastly, if "reasonable" weightings do create material changes in rankings, then you have not collected enough data points to make an informed decision.
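
As a minimal sketch of the with/without-weighting comparison described above (Python again, with hypothetical category names, scores, and weights), note that even tripling the weight of the one category where the weaker supplier leads does not reorder the ranking:

    # Sketch: rank two suppliers with and without a 3x category weight.
    # All category names, scores, and weights are hypothetical.

    def rank(suppliers, weights=None):
        """Rank suppliers by (weighted) mean item score, best first."""
        def total(items):
            if weights is None:
                return sum(items.values()) / len(items)
            w = {k: weights.get(k, 1) for k in items}
            return sum(v * w[k] for k, v in items.items()) / sum(w.values())
        return sorted(suppliers, key=lambda s: total(suppliers[s]), reverse=True)

    suppliers = {
        "Supplier A": {"Functional fit": 5, "Delivery approach": 5, "Price": 3},
        "Supplier B": {"Functional fit": 3, "Delivery approach": 3, "Price": 4},
    }

    print("Unweighted:", rank(suppliers))                # A first: 4.33 vs. 3.33
    print("Price x3:  ", rank(suppliers, {"Price": 3}))  # A still first: 3.80 vs. 3.60

Push the weight far enough, say 10x on Price, and the order does flip, which is exactly the first point above: any weighting large enough to change rankings makes that category the de facto sole criterion.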
