Common use of EVALUATION OF BIDS’ PROJECT VIABILITY Clause in Contracts

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of Offers has brought several advantages: • The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. • The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. • The range of scores from zero to 100 gives more visibility to differences between projects than methods that use single-digit scores. • The methodology allows PG&E to use both the more standardized tool and business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. • Some of the scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. When the scores rated by Xxxxxx and the PG&E team were compared, the differences between scores had a standard deviation of 12 points. Even among individual members of the PG&E team there was a need to review and standardize scoring to reduce discrepancies between individuals’ practices. This suggests that the Calculator is still a crude screening tool with a lot of noise in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other, because the difference falls within the error of the analysis. • As evidenced by feedback from Participants, developers in general have a poor understanding of how the utility interprets the scoring guidelines. Many developers, for example, claimed not to understand that their project cannot obtain a score of 10 out of 10 for project development experience if their team has never brought at least two projects of equal or larger size with similar technology into operation…even though that is explicitly what the scoring guidelines in the Calculator state. • Some scoring criteria would be difficult for a layperson to interpret, such as the Transmission System Upgrade Requirements criterion that requires some basic knowledge of what components of an upgrade require or don’t require a CPUC Permit to Construct or Notice of Construction. Many or most developers lack on-staff experts in the regulatory landscape for new transmission build in California. • Some of the Offers were scored low simply because the Participants omitted basic information about their projects, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. The Participants’ self-scoring was uneven in quality. 
While the PG&E team agreed with the self-scored Calculator scores for about a quarter of Offers, on average PG&E gave the Participant-estimated scores a “haircut” of eleven points. This is somewhat distorted by a few developers who scored their own projects more than 40 points higher than the PG&E team did; Xxxxxx agreed with PG&E that these projects had been assigned grossly inflated scores by any objective standard. PG&E conducted conformance checks of viability assessments for Offers, in part to ensure quality control and consistency in how multiple scorers applied the scoring guidelines. Particular attention was paid to Offers that were considered for short listing in early drafts, in order to confirm the quality and consistency of the assessments. In some cases factors not assessed by the Calculator were taken into consideration when the PG&E team made selections; this is consistent with the direction provided by the CPUC about the use of the Calculator as a screening tool.
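The discussion above turns on a few simple summary statistics: the share of Offers where review left the self-score unchanged, the average “haircut” applied to Participant-estimated scores, and the standard deviation of scorer-to-scorer differences that defines the noise band within which small score gaps should not drive selection. As an illustration only, the short Python sketch below shows one way such a comparison could be tabulated; the offer identifiers, the scores, and the noise_band threshold are hypothetical examples, not figures from the solicitation.

```python
# Illustrative sketch only: compares hypothetical Participant self-scores against
# reviewed scores and reproduces the kind of summary statistics discussed above
# (share of offers where review left the self-score unchanged, average "haircut",
# and the standard deviation of score differences). All numbers are invented.
from statistics import mean, pstdev

# (offer id, self-reported Calculator score, score after review) -- hypothetical data
offers = [
    ("A", 85, 83),
    ("B", 70, 70),
    ("C", 92, 48),   # an example of a grossly inflated self-score
    ("D", 60, 55),
    ("E", 77, 66),
]

diffs = [self_score - reviewed for _, self_score, reviewed in offers]

agreement_rate = sum(d == 0 for d in diffs) / len(offers)
average_haircut = mean(diffs)
score_noise = pstdev(diffs)  # spread of self-score vs. reviewed-score differences

print(f"Offers where review left the self-score unchanged: {agreement_rate:.0%}")
print(f"Average haircut applied on review: {average_haircut:.1f} points")
print(f"Standard deviation of score differences: {score_noise:.1f} points")

# Per the caution above, score gaps smaller than the noise in the scoring process
# should not by themselves decide between two offers.
noise_band = score_noise  # hypothetical threshold choice
for (offer_id, self_score, reviewed), d in zip(offers, diffs):
    if abs(d) <= noise_band:
        print(f"Offer {offer_id}: self-score is within the noise band of the reviewed score")
```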

Appears in 4 contracts

Samples: www.pge.com, www.pge.com, www.pge.com

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of Offers has brought several advantages: • The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. • The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. • The range of scores from zero to 100 gives more visibility to differences between projects than prior methods that use single-digit scores. • The methodology allows PG&E the flexibility to use both the more standardized tool and its subjective business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. • Some of the scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. For example, when an offer for a full-capacity project requires delivery network upgrades estimated to take several years beyond the proposed on-line date to complete, one scorer might assign zero points to “Reasonableness of COD” by observing that the PPA cannot deliver the proposed product on time, and another might assign 10 points, observing that the project can likely start energy-only deliveries on time. When the scores rated by Xxxxxx and the PG&E team for the 2012 RPS RFO were compared, the differences between scores had a standard deviation of 9 points. Even among individual members of the PG&E team there was a need to review and standardize scoring to reduce discrepancies between individuals’ practices. This suggests that the Calculator is still a somewhat crude screening tool with a lot of noise in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other, because the difference falls within the error of the analysis. PG&E took this characteristic of the Calculator into account when using the tool. • Some Participants appear to have a poor understanding of how the utility interprets the scoring guidelines. Many developers, for example, claimed not to understand that their project cannot obtain a score of 10 out of 10 for project development experience if their team has never brought at least two projects of equal or larger size with similar technology into operation…even though that is explicitly what the scoring guidelines in the Calculator state. • Some scoring criteria would be difficult for a layperson to interpret, such as the Transmission System Upgrade Requirements criterion that requires some basic knowledge of what components of an upgrade require or don’t require a CPUC Permit to Construct or Notice of Construction. Many or most developers lack on-staff experts in the regulatory landscape for new transmission build in California. • Some of the Offers were scored low simply because the Participants omitted basic information about their projects, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. 
In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. Some Participants choose to score their proposals in grossly inflated ways that overstate the Offer’s viability beyond any reasonable measure. Xxxxxx believes this renders the self-scored Calculators submitted with offer packages too unreliable to use without review and correction, despite the fact that many or most Participants appear to fill out the form accurately. PG&E’s public solicitation protocol states that the utility “will evaluate the project viability of each offer” using the currently adopted version of the CPUC’s Project Viability Calculator, and that “PG&E will review all submissions and adjust self-scores as appropriate.” Similarly, PG&E’s presentation in its Participants’ Webinar indicated that “All offers will be scored” using the Calculator.

Appears in 3 contracts

Samples: www.pge.com, www.pge.com, www.pge.com

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of proposals has brought several advantages: • The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. • The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. • The range of scores from zero to 100 gives more visibility to differences between projects. • The methodology allows PG&E to use both the more standardized tool and business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. • The scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. When the scores rated by Xxxxxx and the PG&E team were compared, the differences between scores had a standard deviation of 13 points. This suggests that the Calculator is still a crude screening tool with a lot of imprecision in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other; the difference falls within the error of the analysis. • There is a future opportunity for the individual scorers within the PG&E team to achieve greater consistency in how they interpret the scoring guidelines as the team gains greater experience in using the Calculator. • Xxxxxx does not regard some of the criteria in the Calculator as providing particular insight into the likelihood of successful project completion. For example, the score for Transmission Requirements depends simply on when access is expected, and not on the degree of difficulty anticipated for achieving the upgrades required to provide access while maintaining grid reliability and achieving deliverability for the project. Xxxxxx would view a project that depends on a two billion-dollar transmission upgrade requiring the acquisition and permitting of dozens of miles of right-of-way as more risky with respect to schedule than one that requires an upgrade to a single distribution substation, even if they have the same proposed timing for access. • Some scoring criteria would be difficult for a layperson to interpret, such as the Transmission System Upgrade Requirements criterion that requires some basic knowledge of what components of an upgrade require or don’t require a CPUC Permit to Construct or Notice of Construction. Many or most developers lack on-staff experts in the regulatory landscape for new transmission build in California. 
• Some of the proposals were scored low simply because the counterparties omitted basic information, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. The Participants’ self-scoring was uneven in quality. While the PG&E team agreed with the self-scored Calculator scores for about a quarter of Offers, on average PG&E gave the Participant-estimated scores a “haircut” of eleven points. This is somewhat distorted by a few developers who scored their own projects more than 40 points higher than the PG&E team did; Xxxxxx agreed with PG&E that these projects had been assigned grossly inflated scores by any objective standard. PG&E conducted conformance checks of viability assessments for Offers, in part to ensure quality control and consistency in how multiple scorers applied the scoring guidelines. Particular attention was paid to Offers that were considered for short listing in early drafts, in order to confirm the quality and consistency of the assessments. In some cases factors not assessed by the Calculator were taken into consideration when the PG&E team made selections; this is consistent with the direction provided by the CPUC about the use of the Calculator as a screening tool.

Appears in 3 contracts

Samples: www.pge.com, www.pge.com, www.pge.com

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of proposals has brought several advantages: • The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. • The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. • The range of scores from zero to 100 gives more visibility to differences between projects. • The methodology allows PG&E to use both the more standardized tool and business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. • The scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. When the scores rated by Xxxxxx and the PG&E team were compared, the differences between scores had a standard deviation of 13 points. This suggests that the Calculator is still a crude screening tool with a lot of imprecision in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other; the difference falls within the error of the analysis. • There is a future opportunity for the individual scorers within the PG&E team to achieve greater consistency in how they interpret the scoring guidelines as the team gains greater experience in using the Calculator. • Xxxxxx does not regard some of the criteria in the Calculator as providing particular insight into the likelihood of successful project completion. For example, the score for Transmission Requirements depends simply on when access is expected, and not on the degree of difficulty anticipated for achieving the upgrades required to provide access while maintaining grid reliability and achieving deliverability for the project. Xxxxxx would view a project that depends on a two billion-dollar transmission upgrade requiring the acquisition and permitting of dozens of miles of right-of-way as more risky with respect to schedule than one that requires an upgrade to a single distribution substation, even if they have the same proposed timing for access. • Some scoring criteria would be difficult for a layperson to interpret, such as the Transmission System Upgrade Requirements criterion that requires some basic knowledge of what components of an upgrade require or don’t require a CPUC Permit to Construct or Notice of Construction. Many or most developers lack on-staff experts in the regulatory landscape for new transmission build in California. 
• Some of the proposals were scored low simply because the counterparties omitted basic information, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. The Participants’ self-scoring was uneven in quality. While the PG&E team agreed with the self-scored Calculator scores for about a quarter of Offers, on average PG&E gave the Participant-estimated scores a “haircut” of eleven points. This is somewhat distorted by a few developers who scored their own projects more than 40 points higher than the PG&E team did; Xxxxxx agreed with PG&E that these projects had been assigned grossly inflated scores by any objective standard. PG&E conducted conformance checks of viability assessments for Offers, in part to ensure quality control and consistency in how multiple scorers applied the scoring guidelines. Particular attention was paid to Offers that were considered for short listing in early drafts, in order to confirm the quality and consistency of the assessments. In some cases factors not assessed by the Calculator were taken into consideration when the PG&E team made selections; this is consistent with the direction provided by the CPUC about the use of the Calculator as a screening tool.

Appears in 2 contracts

Samples: Purchase Agreement, www.pge.com

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of Offers has brought several advantages: • The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. • The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. • The range of scores from zero to 100 gives more visibility to differences between projects. • The methodology allows PG&E to use both the more standardized tool and business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. • The scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. When the scores rated by Xxxxxx and the PG&E team were compared, the differences between scores had a standard deviation of 13 points. This suggests that the Calculator is still a crude screening tool with a lot of noise in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other because the difference falls within the error of the analysis. • There is a future opportunity for the individual scorers within the PG&E team to achieve greater consistency in how they interpret the scoring guidelines as the team gains greater experience in using the Calculator. • Xxxxxx does not regard some of the criteria in the Calculator as providing particular insight into the likelihood of successful project completion. For example, the score for Transmission Requirements depends simply on when access is expected, and not on the degree of difficulty or cost anticipated for achieving the network upgrades required to provide transmission access while maintaining grid reliability and achieving deliverability for the project. Xxxxxx would view a project that depends on a half-billion-dollar transmission upgrade requiring the acquisition and permitting of dozens of miles of right-of-way as more risky with respect to schedule than one that requires an upgrade to a single distribution substation, even if they have the same proposed timing for access. 
• Some of the Offers were scored low simply because the Participants omitted basic information about their projects, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. The Participants’ self-scoring was uneven in quality. While the PG&E team agreed with the self-scored Calculator scores for about a quarter of Offers, on average PG&E gave the Participant-estimated scores a “haircut” of eleven points. This is somewhat distorted by a few developers who scored their own projects more than 40 points higher than the PG&E team did; Xxxxxx agreed with PG&E that these projects had been assigned grossly inflated scores by any objective standard. PG&E conducted conformance checks of viability assessments for Offers, in part to ensure quality control and consistency in how multiple scorers applied the scoring guidelines. Particular attention was paid to Offers that were considered for short listing in early drafts, in order to confirm the quality and consistency of the assessments. In some cases factors not assessed by the Calculator were taken into consideration when the PG&E team made selections; this is consistent with the direction provided by the CPUC about the use of the Calculator as a screening tool.

Appears in 1 contract

Samples: www.pge.com

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of Offers has brought several advantages: • The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. • The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. • The range of scores from zero to 100 gives more visibility to differences between projects than methods that use single-digit scores. • The methodology allows PG&E to use both the more standardized tool and business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. • Some of the scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. When the scores rated by Xxxxxx and the PG&E team were compared, the differences between scores had a standard deviation of 12 points. Even among individual members of the PG&E team there was a need to review and standardize scoring to reduce discrepancies between individuals’ practices. This suggests that the Calculator is still a crude screening tool with a lot of noise in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other, because the difference falls within the error of the analysis. • As evidenced by feedback from Participants, developers in general have a poor understanding of how the utility interprets the scoring guidelines. Many developers, for example, claimed not to understand that their project cannot obtain a score of 10 out of 10 for project development experience if their team has never brought at least two projects of equal or larger size with similar technology into operation…even though that is explicitly what the scoring guidelines in the Calculator state. • Some scoring criteria would be difficult for a layperson to interpret, such as the Transmission System Upgrade Requirements criterion that requires some basic knowledge of what components of an upgrade require or don’t require a CPUC Permit to Construct or Notice of Construction. Many or most developers lack on-staff experts in the regulatory landscape for new transmission build in California. • Some of the Offers were scored low simply because the Participants omitted basic information about their projects, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. The Participants’ self-scoring was uneven in quality. 
While the PG&E team agreed with the self-scored Calculator scores for about a quarter of Offers, on average PG&E gave the Participant-estimated scores a “haircut” of eleven points. This is somewhat distorted by a few developers who scored their own projects more than 40 points higher than the PG&E team did; Xxxxxx agreed with PG&E that these projects had been assigned grossly inflated scores by any objective standard. PG&E conducted conformance checks of viability assessments for Offers, in part to ensure quality control and consistency in how multiple scorers applied the scoring guidelines. Particular attention was paid to Offers that were considered for short listing in early drafts, in order to confirm the quality and consistency of the assessments. In some cases factors not assessed by the Calculator were taken into consideration when the PG&E team made selections; this is consistent with the direction provided by the CPUC about the use of the Calculator as a screening tool.

Appears in 1 contract

Samples: www.pge.com

EVALUATION OF BIDS’ PROJECT VIABILITY. The implementation of the Project Viability Calculator as a screening tool for use in the evaluation of Offers has brought several advantages: The Calculator is a step in the direction of more standardized evaluation of viability across all three IOUs. The Calculator provides a broader set of criteria by which projects are assessed than was the case with PG&E’s prior approach to scoring viability. The range of scores from zero to 100 gives more visibility to differences between projects than methods that use single-digit scores. The methodology allows PG&E to use both the more standardized tool and business judgment in taking project characteristics into account when making short list decisions. There are still opportunities to improve the use of the Calculator. Some of the scoring guidelines for the Calculator are sufficiently ambiguous that reasonable individuals scoring the same project can arrive at different results. When the scores rated by Xxxxxx and the PG&E team were compared, the differences between scores had a standard deviation of 12 points. Even among individual members of the PG&E team there was a need to review and standardize scoring to reduce discrepancies between individuals’ practices. This suggests that the Calculator is still a crude screening tool with a lot of noise in the scoring process, and that differences of only two or three points between projects should not be regarded as determinative in selecting one and rejecting the other, because the difference falls within the error of the analysis. As evidenced by feedback from Participants, developers in general have a poor understanding of how the utility interprets the scoring guidelines. Many developers, for example, claimed not to understand that their project cannot obtain a score of 10 out of 10 for project development experience if their team has never brought at least two projects of equal or larger size with similar technology into operation…even though that is explicitly what the scoring guidelines in the Calculator state. Some scoring criteria would be difficult for a layperson to interpret, such as the Transmission System Upgrade Requirements criterion that requires some basic knowledge of what components of an upgrade require or don’t require a CPUC Permit to Construct or Notice of Construction. Many or most developers lack on-staff experts in the regulatory landscape for new transmission build in California. Some of the Offers were scored low simply because the Participants omitted basic information about their projects, even though upon debriefing it became clear that full disclosure would have resulted in a higher viability score. It is unclear to Xxxxxx how this could be improved in the future, since the solicitation materials clearly stated what information was required. In Xxxxxx’x opinion, PG&E reasonably measured the viability of every project that submitted a conforming proposal for bundled energy, out-of-state power attached to renewable energy credits, or biogas. The evaluation team did not use the Calculator to evaluate Offers for RECs only or sites for development; some Participants for the former did not submit data needed to evaluate their viability, and proposals of land sales or leases are not amenable to scoring as power projects with the information requested or supplied. The Participants’ self-scoring was uneven in quality. 
While the PG&E team agreed with the self-scored Calculator scores for about a quarter of Offers, on average PG&E gave the Participant-estimated scores a “haircut” of eleven points. This is somewhat distorted by a few developers who scored their own projects more than 40 points higher than the PG&E team did; Xxxxxx agreed with PG&E that these projects had been assigned grossly inflated scores by any objective standard. PG&E conducted conformance checks of viability assessments for Offers, in part to ensure quality control and consistency in how multiple scorers applied the scoring guidelines. Particular attention was paid to Offers that were considered for short listing in early drafts, in order to confirm the quality and consistency of the assessments. In some cases factors not assessed by the Calculator were taken into consideration when the PG&E team made selections; this is consistent with the direction provided by the CPUC about the use of the Calculator as a screening tool.

Appears in 1 contract

Samples: www.pge.com
