Common use of INDEPENDENT OFFER ANALYSES Clause in Contracts

INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation process. The two sets of valuations generally correlated well, with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations. Xxxxxx did not use its simplified model to construct a separate short list. Instead, the simplified model was useful in quality control to identify errors in PG&E’s or the IE’s inputs, parameters, or assumptions for specific Offers. The comparison also helped identify which factors caused specific Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of TRCR or transmission wheeling adders. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the original Energy Division version of the Project Viability Calculator. This was useful for estimating the Calculator’s standard error and for judging whether differences in score reflect significant differences in the viability of projects or fall within the noise of the method for assessing viability. Xxxxxx emerged from the comparison (shown in Figure 4) with a view that, given the roughness of the tool and the subjectivity of its use, differences of a dozen or fewer points in viability score may not indicate that one project is significantly likelier than another to achieve successful completion. The correlation of the IE’s and PG&E team’s scores using the Project Viability Calculator is poorer than that between the valuation models. Xxxxxx ascribes this to gray areas in the scoring guidelines, to differences in the subjective judgments of individual scorers, and to PG&E’s use of an additional evaluation criterion in its modified Calculator.
The comparison between the sets of scores helped reveal specific errors that Xxxxxx acknowledged in its draft scores and corrected, but no doubt there are other errors in Xxxxxx’x viability scoring that have not yet been identified.
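The factors listed above, especially the discount rate assumption, lend themselves to a toy numerical sketch. The Python snippet below (all cash flows and rates are hypothetical and not drawn from the report) shows how the choice between a utility’s authorized cost of capital and a higher merchant cost of capital can, by itself, reorder two Offers’ valuations:

```python
def npv(cash_flows, rate):
    """Net present value of a series of annual cash flows (toy $ figures)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Two hypothetical Offers: one with steady margins, one with back-loaded margins.
offer_a = [30, 30, 30, 30, 30]   # steady cash-flow profile
offer_b = [10, 20, 30, 40, 60]   # margins grow over time

# Illustrative discount rates: a utility's authorized weighted cost of capital
# vs. a higher estimate of a merchant generator's cost of capital.
utility_rate, merchant_rate = 0.07, 0.12

a_low, b_low = npv(offer_a, utility_rate), npv(offer_b, utility_rate)
a_high, b_high = npv(offer_a, merchant_rate), npv(offer_b, merchant_rate)

# The back-loaded Offer loses relatively more value at the higher rate, so the
# two Offers swap places in the ranking purely on the discount rate assumption.
print(a_low, b_low)    # Offer B ranks ahead at the utility rate
print(a_high, b_high)  # Offer A ranks ahead at the merchant rate
```

The reordering is the point: neither set of cash flows changes, yet which Offer looks more valuable depends entirely on whose cost of capital is assumed.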

Appears in 5 contracts

Samples: Power Purchase Agreement, Power Purchase Agreement, Power Purchase Agreement


INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own simplified valuation analysis. Xxxxxx’x valuations generally correlated well with PG&E’s Net Market Value analysis for many Offers, but with a fair amount of noise in the comparison, as shown in Figure 9, which compares the two sets of valuations. The mediocre quality of the correlation is less interesting than the outliers and the underlying reasons for some of the divergences:
• PG&E assigned a higher value to new projects interconnecting in non-CAISO balancing authority areas because no transmission adders are applied; Xxxxxx estimates an adder for network upgrades for these projects. This is most clearly seen in the two shortlisted projects interconnecting into IID’s grid.
• PG&E assigned network upgrade costs to projects for an interconnection even if the developer reports that the costs will be borne by another project using a share of the interconnection capacity, on the logic that the costs should still be allocated to the project making an Offer.
• Some scatter is due to the difference in discount rates applied to future years’ cash flows; PG&E uses its own authorized weighted cost of capital as a regulated utility, while Xxxxxx uses a higher estimate of merchant generators’ cost of capital.
The adjustments have a considerable impact on the value rankings of Offers. Figure 10 plots Offers’ NMV against PAV, showing visually how much the adjustments can reduce the PAV for some Offers, substantially altering their ranking. Overall, if Xxxxxx had used its valuation and viability scores to identify high-value candidates for selection, more Offers in SP-15 would have been chosen, including more existing geothermal and wind projects. Fewer Offers in NP-15 would have been chosen, and projects that Xxxxxx scored below median for project viability would have been rejected. This simply reflects the strength of PG&E’s preference for projects in its own service territory, its disinterest in counting IID network upgrade costs that do not directly affect PG&E’s rates, and its greater willingness to select lower-viability proposals. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the final version of the 2011 Project Viability Calculator, anticipating a later need to rank projects that obtain executed PPAs against a peer group made up of all RFO proposals.
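The network-upgrade adjustment described above can be illustrated with a toy ranking comparison. In the Python sketch below (project names, values, and adders are all hypothetical), subtracting an estimated network-upgrade adder from an Offer interconnecting outside the CAISO moves it from the top of an unadjusted ranking to the bottom of the adjusted one:

```python
# Hypothetical Offers: (name, unadjusted net market value in $/MWh,
# estimated network-upgrade adder in $/MWh for non-CAISO interconnections).
offers = [
    ("IID-connected solar", 14.0, 6.0),  # no adder applied in the unadjusted NMV
    ("CAISO wind",          11.0, 0.0),  # adder already reflected in its NMV
    ("CAISO geothermal",    10.0, 0.0),
]

# Ranking by unadjusted net market value.
by_nmv = sorted(offers, key=lambda o: o[1], reverse=True)

# Adjusted ranking: subtract the estimated network-upgrade adder.
by_pav = sorted(offers, key=lambda o: o[1] - o[2], reverse=True)

print([name for name, *_ in by_nmv])  # the non-CAISO project leads unadjusted
print([name for name, *_ in by_pav])  # it falls to last once the adder applies
```

The sketch shows why an Offer that appears cheapest before the adjustment can become the most expensive after it, which is the mechanism behind the divergent short lists discussed above.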

Appears in 1 contract

Samples: Purchase and Sale Agreement


INDEPENDENT OFFER ANALYSES. Xxxxxx conducted its own rather simplified valuation process. The two sets of valuations generally correlated well, with a fair amount of noise in the comparison, as shown in Figure 3, which compares the two sets of valuations. Xxxxxx did not use its simplified model to construct a separate short list. Instead, the simplified model was useful in quality control to identify errors in PG&E’s or the IE’s inputs, parameters, or assumptions for specific Offers. The comparison also helped identify which factors caused specific Offers to be ranked high or low in PG&E’s short-listing process, such as the impact of the discount rate assumption, the on-line date, the choice of which transmission cluster to assign to an Offer, and the size of TRCR or transmission wheeling adders. Xxxxxx also scored each Offer for viability independently of PG&E’s analysis, using the Energy Division’s version of the Project Viability Calculator and not PG&E’s modified version. This was useful for estimating the Calculator’s standard error and for judging whether differences in score reflect significant differences in the viability of projects or fall within the noise of the method for assessing viability. Xxxxxx emerged from the comparison (shown in Figure 4) with a view that, given the modest precision of the tool and the subjectivity of its use, differences of a dozen or fewer points in viability score may not indicate that one project is significantly likelier than another to achieve successful completion. The correlation of the IE’s and PG&E team’s scores using the Project Viability Calculator is poorer than that between the valuation models.
Xxxxxx ascribes this to the gray areas in the scoring guidelines, to differences in the subjective judgments of individual scorers, and to PG&E’s use of an additional evaluation criterion in its modified Calculator. The comparison between the sets of scores helped reveal specific errors that Xxxxxx acknowledged in its draft scores and corrected, but no doubt there are other errors in Xxxxxx’x viability scoring that have not yet been identified.
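The claim that a dozen-point gap may lie within the method’s noise can be made concrete with a toy calculation. In the Python sketch below (the scores are hypothetical, not the report’s data), the spread of disagreements between two independent scorers rating the same projects is itself several points, comparable in magnitude to the score gaps being interpreted:

```python
import statistics

# Hypothetical viability scores (0-100) for the same eight projects from two
# independent scorers using the same Calculator.
scorer_1 = [78, 65, 82, 55, 70, 61, 88, 49]
scorer_2 = [70, 72, 75, 60, 63, 68, 80, 58]

# Project-by-project disagreement between the two scorers.
diffs = [a - b for a, b in zip(scorer_1, scorer_2)]

# Standard deviation of the disagreements: a rough measure of scoring noise.
spread = statistics.stdev(diffs)
print(round(spread, 1))

# If two scorers routinely disagree by this much on the *same* project, a
# similar-sized gap between two *different* projects is weak evidence that
# one is genuinely more viable than the other.
```

This is only a sketch of the reasoning, not a reproduction of the IE’s standard-error estimate, but it shows why score differences smaller than the scorer-to-scorer spread should not drive selection decisions.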

Appears in 1 contract

Samples: Power Purchase Agreement
