Evaluation Methodology Sample Clauses

Evaluation Methodology. 5.1. The Evaluator shall use a documented and evidence-based methodology which meets the requirements of ISO/IEC 19011, or equivalent. This shall include adequate checks of relevant documentation, operating procedures and records of the operations of the organisations responsible for implementing the TLAS, identification of any cases of non-compliance and system failures, and issuance of requests for corresponding corrective action. 5.2. The Evaluator shall, inter alia:
(a) Review the process for accreditation of Independent Assessment and Verification Bodies (LP and LV);
(b) Review documented procedures of each body involved in TLAS implementation controls for completeness and coherence;
(c) Examine implementation of documented procedures and records, including work practices, during visits to offices, forest harvesting areas, log yards/log ponds, forest checking stations, mill sites and export and import points;
(d) Examine information collected by the regulatory and enforcement authorities, LPs and LVs and other bodies identified in the TLAS to verify compliance;
(e) Examine data collection by private sector organisations involved in TLAS implementation;
(f) Assess the availability of public information set out in Annex IX, including the effectiveness of information disclosure mechanisms;
(g) Make use of the findings and recommendations of Independent Monitoring and Comprehensive Evaluation reports, as well as reports of the Independent Market Monitor;
(h) Seek the views of stakeholders and use information received from stakeholders who are either directly or indirectly involved in the implementation of the TLAS; and
(i) Use appropriate sampling and spot-check methods to evaluate the work of the forest regulatory agencies, LPs and LVs, industries, and other relevant actors at all levels of forest activities, supply chain control, timber processing and export licensing, including cross-checks with information on timber imports from Indonesia provided by the Union.
Evaluation Methodology. The methodology reviewed for the evaluation of the reserves is the JORC Code (Joint Ore Reserves Committee).
Evaluation Methodology. The evaluation of the bids will be done in a two-stage process. Bidders who do not meet Stage 1 (Administrative Compliance Requirements: completion or attachment of the compulsory documents) of the evaluation shall not be considered for Stage 2 evaluation (Price and B-BBEE).
1. Declaration of Interest form (ECBD 4)
2. Signing of Declaration of Bidder’s Past Supply Chain Management Practices (ECBD 8)
Evaluation Methodology. In addition to all items highlighted under Page 2 titled “Very Important Notice on Disqualifications”, the tenders will be evaluated in terms of the Municipality’s Supply Chain Management policy and the Preferential Procurement Policy Framework Act (Act 5 of 2000) and its regulations as enacted in 2001. Tenders will be evaluated using the 80/20 points allocation system. The total points, out of a possible maximum of 100, will be calculated using various formulae to allocate points for price as well as for preferential procurement.
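For illustration only: a minimal sketch of how an 80/20 evaluation is commonly computed, assuming the standard price-points formula Ps = 80 x (1 - (Pt - Pmin) / Pmin) used under the Preferential Procurement Regulations, with up to 20 preference points added for B-BBEE status. The function names and the B-BBEE points table below are assumptions made for the example, not terms of the clause.

```python
# Hedged sketch of an 80/20 points evaluation (assumed standard price formula).
# The B-BBEE points mapping is illustrative only; the applicable regulations
# and bid documents govern the actual values.

BBBEE_POINTS_80_20 = {1: 20, 2: 18, 3: 14, 4: 12, 5: 8, 6: 6, 7: 4, 8: 2}  # assumption


def price_points(bid_price: float, lowest_price: float, max_points: int = 80) -> float:
    """Ps = 80 * (1 - (Pt - Pmin) / Pmin)  -- assumed standard formula."""
    return max_points * (1 - (bid_price - lowest_price) / lowest_price)


def total_points(bid_price: float, lowest_price: float, bbbee_level: int | None) -> float:
    """Price points plus preference points for the bidder's B-BBEE status level."""
    preference = BBBEE_POINTS_80_20.get(bbbee_level, 0)
    return price_points(bid_price, lowest_price) + preference


if __name__ == "__main__":
    bids = {"Bidder A": (1_000_000, 1), "Bidder B": (950_000, 4)}  # (price, B-BBEE level)
    lowest = min(price for price, _ in bids.values())
    for name, (price, level) in bids.items():
        print(name, round(total_points(price, lowest, level), 2))
```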
Evaluation Methodology. (1) Methodology for Evaluation and Award shall also include price evaluation based on overall lowest evaluated price (L-1) basis. (2) The evaluated price of bidders shall include the following:
(i) Ex-works price quoted by the bidder (including packing, forwarding, and GST on components and raw materials but excluding Inland Transportation to Delivery Location), including cost of Inspection by Third Party Agency, mandatory spares etc. (wherever applicable);
(ii) Inland transportation up to Delivery Location and other costs incidental to delivery of goods;
(iii) GST (CGST & SGST/UTGST or IGST) on the finished goods including inland transportation (i.e. on sl. no. (i) and (ii) above).
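As a worked illustration only, a minimal sketch of how the evaluated price and the L-1 ranking could be computed under this clause; the field names and the GST rate are assumptions for the example, not values taken from the clause.

```python
# Hedged sketch of L-1 (lowest evaluated price) ranking for the clause above.
# Field names and the GST rate are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Bid:
    bidder: str
    ex_works_price: float    # item (i): ex-works incl. packing, forwarding, inspection, spares
    inland_transport: float  # item (ii): transport to delivery location and incidental costs
    gst_rate: float = 0.18   # item (iii): GST on finished goods incl. transport (assumed rate)

    @property
    def evaluated_price(self) -> float:
        base = self.ex_works_price + self.inland_transport
        return base + base * self.gst_rate  # GST applied on items (i) and (ii)


def rank_l1(bids: list[Bid]) -> list[Bid]:
    """Return bids sorted so that bids[0] is L-1 (lowest evaluated price)."""
    return sorted(bids, key=lambda b: b.evaluated_price)


if __name__ == "__main__":
    bids = [Bid("A", 1_000_000, 40_000), Bid("B", 980_000, 70_000)]
    for rank, bid in enumerate(rank_l1(bids), start=1):
        print(f"L-{rank}: {bid.bidder} evaluated price = {bid.evaluated_price:,.2f}")
```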
Evaluation Methodology. 6.1 Applications will be assessed against the minimum documents listed in Section 5, which must be submitted and complete, to determine which service providers are compliant or non-compliant. 6.2 Only those applications that are fully compliant and whose submissions have been verified will be listed on the database and be eligible to submit quotations for programmes or services as the need arises.
Evaluation Methodology. The method used to evaluate the CONCESSIONAIRE's compliance with its obligations for the renewal of the CONCESSION, as approved by Supreme Decree No. 008-2021-MTC or any rule that modifies, replaces or substitutes it.
Evaluation Methodology. The Government will assess all responsive proposals against the solicitation requirements and criteria defined by the evaluation factors below.
Evaluation Methodology. At the end of the year, we look at the achievements of the ministry/division, compare them with the targets, and determine the composite score. Annex-3 provides an example from the Ministry of Education. For simplicity, we have taken only one objective to illustrate the evaluation methodology.

The Raw Score for Achievement in Column 6 is obtained by comparing the achievement with the agreed target values. For example, the achievement for the first performance indicator (% increase in primary health care centers) is 15%. This achievement falls between the target values for 80% (Good) and 70% (Fair), and hence the Raw Score is 75%.

The Weighted Raw Score for Achievement is obtained by multiplying the Raw Score by the relative weight. Thus, for the first performance indicator, the Weighted Raw Score is obtained by multiplying 75% by 0.50, which gives a weighted raw score of 37.5%.

Finally, the Composite Score is calculated by adding up all the Weighted Raw Scores for achievements. In Table 1, the Composite Score is calculated to be 84.5%. The Composite Score shows the degree to which the ministry/division in question was able to meet its objectives. The fact that it got a score of 84.5% in our hypothetical example implies that the ministry's performance vis-à-vis this objective was rated as "Very Good."

The methodology outlined above is general in its application. Various ministries/divisions will have a diverse set of objectives and corresponding performance indicators. Yet, at the end of the year every ministry/division will be able to compute its Composite Score for the past year. This Composite Score will reflect the degree to which the ministry was able to achieve the promised results.

Excellent = 100% - 96%; Very Good = 95% - 86%; Good = 85% - 76%; Fair = 75% - 66%; Poor = 65% and below.
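For illustration only, a minimal sketch of the composite-score arithmetic described above. The per-indicator target values, the second indicator, and the midpoint rule for achievements that fall between two score levels are assumptions made to reproduce the worked example; they are not values prescribed by the clause.

```python
# Hedged sketch of the composite-score calculation described above.
# Target values, weights and the midpoint rule are illustrative assumptions.

# Assumed target values per raw-score level for the first indicator
# (e.g. a 20% increase earns a raw score of 80, a 10% increase earns 70).
TARGETS = {100: 30, 90: 25, 80: 20, 70: 10, 60: 5}  # raw score (%) -> target value


def raw_score(achievement: float, targets: dict[int, float]) -> float:
    """Assign a raw score; achievements between two target values get the midpoint."""
    levels = sorted(targets.items(), reverse=True)  # highest score level first
    for (hi_score, hi_target), (lo_score, lo_target) in zip(levels, levels[1:]):
        if achievement >= hi_target:
            return hi_score
        if lo_target <= achievement < hi_target:
            return (hi_score + lo_score) / 2  # e.g. 15% falls between 80 and 70 -> 75
    return levels[-1][0]  # below the lowest target


def composite_score(indicators: list[tuple[float, float, dict[int, float]]]) -> float:
    """Sum of weighted raw scores; indicators = [(achievement, weight, targets), ...]."""
    return sum(raw_score(achievement, targets) * weight
               for achievement, weight, targets in indicators)


def rating(score: float) -> str:
    """Map a composite score to the rating bands listed in the clause."""
    if score >= 96:
        return "Excellent"
    if score >= 86:
        return "Very Good"
    if score >= 76:
        return "Good"
    if score >= 66:
        return "Fair"
    return "Poor"


if __name__ == "__main__":
    # First indicator from the worked example: achievement 15%, weight 0.50 -> 37.5.
    # The second indicator is invented to complete the example.
    indicators = [(15.0, 0.50, TARGETS), (28.0, 0.50, TARGETS)]
    score = composite_score(indicators)
    print(score, rating(score))
```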
Evaluation Methodology. The evaluation methodology to be used in the project will consist of an expert-based evaluation (during pre-trials), a user-based evaluation in a controlled environment (during pre-trials), and a user-based evaluation at the elderly home (trial phase). The methodology will further provide a combination of recognized qualitative and quantitative usability analysis methods to report the findings, covering the project's pre-trials as well as the project's final trial. Qualitative analysis components, such as users' personal comments in the form of structured questionnaires and focus groups, as well as expert observations, will be used. For the quantitative analysis of the system, questionnaires to be filled in by the end users as well as their caregivers were constructed (see Appendix A and Appendix B). The pre-trial questionnaires are simpler, as certain features of the complete system will not be possible to assess due to their prototype nature. However, the trial questionnaires, along with automatically gathered measurements, will provide a full picture for every indicator mentioned in section 4.4. Furthermore, a selection questionnaire (see Appendix C) will be used to ensure that the end-user sample participating in the trials is representative of the general target audience of the system. The constructed questionnaires incorporate elements of standardized and validated questionnaires adapted to our system. In detail, the pre-trial questionnaires comprise questions adapted from the System Usability Scale (SUS) and User Success Rate (USR) [1] [2], which are widely used to assess the usability of a system. Parts of Social Presence questionnaires were used in order to measure the realism and the engagement involving the avatar system [3] [4], as well as the user's Perception of the Personality of the avatar [5]. Furthermore, the WHOQOL-BREF [17] questionnaire is used to measure the Quality of Life of the elder. In order to assess the indicators of objective 5, the Zarit Burden Interview (ZBI) questionnaire, which aims at assessing the reduction of the burden of care of the caregivers, was adapted in order to build the questionnaire "PART G – Care Demand" in the trial questionnaire. Also, the Groningen Activity Restriction Scale (GARS), a non-disease-specific instrument to measure disability in activities of daily living (ADL) and instrumental activities of daily living (IADL), was included in the trial questionnaire. It was developed in studies...
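Since the clause above relies on the System Usability Scale (SUS), a minimal sketch of the standard SUS scoring rule is given here for reference; the example responses are invented, and nothing about the project's own questionnaires is assumed beyond the use of SUS.

```python
# Hedged sketch of standard SUS (System Usability Scale) scoring.
# The example responses below are invented for illustration.

def sus_score(responses: list[int]) -> float:
    """Compute the 0-100 SUS score from ten responses on a 1-5 Likert scale.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response);
    the sum is scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each between 1 and 5")
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    return sum(adjusted) * 2.5


if __name__ == "__main__":
    example = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]  # invented responses
    print(sus_score(example))  # 85.0
```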