Results Achieved and Impact Sample Clauses

Results Achieved and Impact. At the start of the project no specific KPIs were set. However, the numbers of visitors and of interactions with the tool have demonstrated the success and impact of the case. In the first three months after the official project launch there were about 10,000 unique visitors on the platform, and over 16,000 My2050 pathways have been created to date. Regarding stakeholders, about 200 were involved in the initial (building) phase, and after the launch about 500 stakeholders were contacted. Moreover, a week-long online debate involving 5-6 experts took place, with many comments from the general public. One of the project's main purposes was (and still is) to inform policy makers in a documented manner; from this point of view, it can be considered successful. The most concrete example is the UK government's "Carbon Plan 2011" (on how the UK will look in 2050), published in late 2011, which included the 2050 Pathways Calculator as one of its main pieces of evidence and visualisation. The same tool was also used in the Annual Energy Statements accompanying the budget, and in General Election briefing work. It is important to note that there are Master's programmes, both in and outside the UK, that use the 2050 Pathways models and tools in their courses. In addition, the My2050 game is also presented to pupils in various UK schools; a "schools' toolkit" is available for download from the project's website, as well as from other websites, including the Department for Education website. It has to be noted that, due to the project's open source nature, it is quite difficult to tell how many people are using the platform and who exactly they are. In addition, a large number of presentations have been given at workshops, schools, conferences and NGOs, and to international colleagues; a presentation was also made to the European Commission. The project has also received very positive media coverage (around 15 key articles about the project). Other references to the case have also been made (e.g. at cultural festivals). The main pillar of the project's success is definitely the innovations that it brings to life. One of these core innovations is its radical transparency and ease of use. The model aims to encompass all technically possible futures and to support a fruitful debate based on realistic scenarios (rather than guesses). The model provides genuinely valuable feedback to high-level decision makers relative to communicating and inter...
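To make the nature of the tool concrete: the 2050 Pathways Calculator lets users pick an effort level for each sector of the economy and computes the resulting emissions pathway. The Python sketch below illustrates that general mechanism only; the sector names, effort levels and emission figures are invented for illustration and are not taken from the actual calculator.

```python
# Minimal sketch of a "pathway calculator" in the spirit of the 2050
# Pathways Calculator. Sector names and emission figures are invented;
# the real tool uses a detailed, publicly documented model.

# Annual 2050 emissions (MtCO2e) for each sector at effort levels
# 1 (least ambitious) through 4 (most ambitious). Hypothetical numbers.
SECTORS = {
    "domestic_heating": [120, 90, 60, 30],
    "road_transport":   [110, 80, 50, 20],
    "electricity":      [150, 100, 60, 10],
}

def pathway_emissions(choices: dict[str, int]) -> float:
    """Total emissions for one pathway: an effort level (1-4) per sector."""
    return sum(SECTORS[s][level - 1] for s, level in choices.items())

# One user-selected pathway: ambitious on electricity, moderate elsewhere.
choice = {"domestic_heating": 2, "road_transport": 2, "electricity": 4}
print(f"Pathway total: {pathway_emissions(choice)} MtCO2e")
```

In the real tool, each lever is backed by documented technical assumptions, which is where the radical transparency mentioned above comes from: users can see exactly why a given combination of choices produces a given outcome.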
Results Achieved and Impact. The main achievement of GLEAM so far has been the production of a real-time forecast for the H1N1 pandemic, which was quite a successful exercise and demonstrated the power of the model. A validation paper published in December 2012 showed that the GLEAM predictions were largely accurate. Many stakeholders also use the software to support their policy-making procedures when designing measures to prevent or constrain the spread of diseases; examples include the US Defence Agency, the JRC, and other organisations. It has to be noted that the JRC uses the tool in its long-term strategy for studying and responding to the spread of epidemics (by communicating the simulation results to XX XXXXX policy officers), based on the experience accumulated while using the GLEAM toolkit during the H1N1 pandemic. The core innovation of GLEAM lies in its computational model, which can integrate data from various sources and provide a close-to-real-time forecast of the spread of epidemics on a global level (by combining various real-time data sources), something that was not possible before at that level of precision and timeliness. Moreover, through the visual interface users can create their own models and investigate the specific diseases and issues they are interested in.
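For readers unfamiliar with this class of model, the sketch below illustrates the metapopulation idea that models such as GLEAM are built on: compartmental epidemic dynamics inside each city, coupled by travel flows. All populations, rates and the travel matrix here are invented, and only infectious travellers are moved (a deliberate simplification); the real model integrates census, mobility and airline data at far higher resolution.

```python
# Minimal sketch of a metapopulation epidemic model in the GLEAM spirit:
# discrete-time SIR dynamics within each city, coupled by a travel matrix.
# All parameters below are invented for illustration.

import numpy as np

beta, gamma = 0.3, 0.1           # transmission and recovery rates (per day)
N = np.array([8e6, 3e6])         # city populations (hypothetical)
travel = np.array([[0.00, 0.01],  # fraction of residents moving i -> j per day
                   [0.02, 0.00]])

S, I, R = N.copy(), np.array([10.0, 0.0]), np.zeros(2)
S -= I                            # seed 10 infections in city 0

for day in range(120):
    # Within-city epidemic step.
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    # Coupling: infectious travellers carry the disease between cities
    # (simplification: susceptible/recovered travel is ignored).
    I += travel.T @ I - travel.sum(axis=1) * I

print(f"Final epidemic sizes per city: {np.round(R).astype(int)}")
```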
Results Achieved and Impact. One of the first and main indicators was the participation rate: the users who arrive on the platform for the first time versus those who become active participants. The number of people who visit a website is always greater than the number who actually participate (in some projects the rate was close to 50%, in others around 10%). In the State Department instance (of Opinion Space 3.0), more than 2,000 different ideas were collected (about US foreign policy), along with more than 5,000 individual responses. It cannot be said whether the final decisions were based on some of the ideas provided, but a detailed report was delivered to the policy makers. The project with a US auto-maker (aimed at identifying ways of improving their image) resulted in about 1,000 ideas and about 100,000 ratings evaluating these ideas (more specifically, the discussion was about green vehicles). In Opinion Space's understanding, the results exceeded even the optimistic expectations, considering that the target groups were specific and limited in most of the implementations. In the cases that targeted a broad open public, the goal was not met; but in terms of specific target groups, the results exceeded expectations. One of the core innovations and successes of Opinion Space is the very fast way to browse (and rate) a large number of ideas (even if this is a visualisation-oriented innovation). From the scientific point of view, the greatest innovation was bringing statistical analysis to structured discussion data. One of the best endorsements of Opinion Space was Xxxxxxx Xxxxxxx'x reference to the initiative; other endorsements include high-level officers of collaborating companies, as presented on the Opinion Space website. As far as the Opinion Space team is aware, Opinion Space has not yet been incorporated into any formal decision-making procedures. The State Department, however, uses Opinion Space informally to get ideas and opinions on specific policies.
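Opinion Space's fast browsing works by placing each participant on a two-dimensional map so that similar opinions appear close together. The sketch below illustrates that general idea with a standard PCA projection of rating vectors; the data is invented, and the actual algorithm and data pipeline are Opinion Space's own.

```python
# Illustrative sketch of the opinion-mapping idea behind Opinion Space:
# each participant rates a few propositions on a continuous scale, and
# the response vectors are projected to 2D so that similar opinions land
# near each other. Data is synthetic; the real system is its own design.

import numpy as np

rng = np.random.default_rng(0)
responses = rng.uniform(-1, 1, size=(200, 5))  # 200 users x 5 propositions

# PCA via SVD: project centred responses onto the top two components.
centred = responses - responses.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords = centred @ vt[:2].T    # 2D map position of each participant

print(coords[:3])              # coordinates of the first three participants
```

This is also what makes rating scalable: instead of reading ideas in a linear list, a user can jump directly to regions of the map that represent unfamiliar or opposing viewpoints.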
Results Achieved and Impact. As far as impact is concerned, the European case is not at the same level as the US ones. In the US quite a number of MPOs actively utilize the UrbanSim platform. The most indicative application, representing the approach common in the US, is probably the San Francisco Bay Area one. That case involved examining and analysing five alternative scenarios, which required articulating a set of assumptions about land use policies, transport policies and macro-economic growth (the analysis is now complete; relevant publications will be available in the next few months). In one of the scenarios, analysing the visibility of the proposed policy through reverse engineering was attempted, which made the task much more challenging, both in terms of research and implementation. The agency has now accepted the results, with documentation and visualization supporting them. In the San Francisco case, the 3D visualization system (output shown in Figure 12) was created in order to achieve higher visibility among citizens than the plain UrbanSim tool. The intention was to use this system in a number of workshops held during January 2012. User engagement was intense even in the development/testing phase. In addition, the public agencies used it in a series of meetings with community organizations, each with 15 to 200 participants. The point of these meetings was to communicate the different scenarios to the public and to receive feedback on the citizens' preferences. One of the most innovative elements of UrbanSim is its combination of various technological and theoretical advances, as well as its withdrawal of strong assumptions common in urban planning models (such as the assumption that markets are in equilibrium) in favour of weaker ones. For example, the impacts of transport projects on urban development are far from instantaneous (in fact they may evolve over decades). The capacity to support these weaker assumptions can also be considered a core innovation. The core innovation in the particular case of San Francisco lies between the following two:
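To illustrate what these weaker assumptions mean in practice: rather than solving for a market equilibrium, microsimulation models of this kind typically simulate individual location choices period by period, often with a multinomial logit model. The sketch below shows the general form of one such choice step; the zone attributes and coefficients are invented and are not UrbanSim's actual specification.

```python
# Minimal sketch of a multinomial-logit location choice of the kind used
# in UrbanSim-style microsimulations: a household picks among zones with
# probability proportional to exp(utility), one simulated period at a
# time, so market effects unfold gradually rather than instantaneously.
# Zone attributes and coefficients are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Zone attributes: [log rent, accessibility score] for four zones.
zones = np.array([[7.2, 0.9], [6.8, 0.6], [6.5, 0.4], [7.5, 1.0]])
coef = np.array([-1.2, 2.0])   # households dislike rent, value accessibility

utility = zones @ coef
prob = np.exp(utility) / np.exp(utility).sum()  # logit choice probabilities
chosen = rng.choice(len(zones), p=prob)         # simulate one household move

print(f"Choice probabilities: {np.round(prob, 3)}, chosen zone: {chosen}")
```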
Results Achieved and Impact. Question 3.1 - What are the main results achieved by the case/initiative? What are the key indicators of the project/initiative (either impact-oriented or operation/technology-oriented)? How were they selected/developed? Were/are they met? At the start of the project no specific KPIs were set. In the first three months after the official project launch there were about 10,000 unique visitors on the platform, and over 16,000 My2050 pathways have been created to date. Regarding stakeholders, about 200 were involved in the initial (building) phase, and after the launch about 500 stakeholders were contacted. Moreover, a week-long online debate involving 5-6 experts took place, with many comments from the general public. In addition, a large number of presentations have been given at workshops, schools, conferences and NGOs, and to international colleagues; a presentation was also made to the European Commission. The project has also received very positive media coverage (around 15 key articles about the project). Other references to the case have also been made (e.g. at cultural festivals).
Results Achieved and Impact. Question 3.1 - What are the main results achieved by the case/initiative? What are the key indicators of the project/initiative (either impact-oriented or operation/technology-oriented)? How were they selected/developed? Were/are they met? The number of active users is above 100. However, many user accounts belong to institutions such as laboratories and universities, so these accounts correspond to multiple individual users.

Related to Results Achieved and Impact

  • Evaluation Cycle Goal Setting and Development of the Educator Plan

  • Evaluation Results A. Evaluation results shall be used:

  • Evaluation Criteria 5.2.1. The responses will be evaluated based on the following: (edit evaluation criteria below as appropriate for your project)

  • Justification and Anticipated Results The Privacy Act requires that each matching agreement specify the justification for the program and the anticipated results, including a specific estimate of any savings. 5 U.S.C. § 552a(o)(1)(B).

  • Target Population The Grantee shall ensure that diversion programs and services provided under this grant are designed to serve juvenile offenders who are at risk of commitment to the Department.

  • Expected Results VA’s agreement with DoD to provide educational assistance is a statutory requirement of Chapter 1606, Title 10, U.S.C., Chapter 1607, Title 10, U.S.C., Chapter 30, Title 38, U.S.C., and Chapter 33, Title 38, U.S.C. (Post-9/11 GI Xxxx). These laws require VA to make payments to eligible veterans, service members, guard members, reservists, and family members under the transfer-of-entitlement provisions. The responsibility for determining basic eligibility for Chapter 1606 is placed on DoD. The responsibility for determining basic eligibility for Chapter 30 and Chapter 33 is placed on VA, while the responsibility for providing initial eligibility data for Chapter 30 and Chapter 33 is placed on DoD. Thus, the two agencies must exchange data to ensure that VA makes payments only to those who are eligible for a program. Without an exchange of enrollment and eligibility data, VA would not be able to establish or verify applicant and recipient eligibility for the programs. Subject to the due process requirements set forth in Article VII.B.1., 38 U.S.C. §3684A, VA may suspend, terminate, or make a final denial of any financial assistance on the basis of data produced by a computer matching program with DoD. To minimize the administrative costs of implementing the law and to maximize the service to the veteran or service member, a system of data exchanges and subsequent computer matching programs was developed. The purposes of the computer matching programs are to minimize the costs of administering the Xxxxxxxxxx GI Xxxx — Active Duty, the Xxxxxxxxxx GI Xxxx — Selected Reserve, the Reserve Educational Assistance Program, and the Post-9/11 GI Xxxx program; to facilitate accurate payment to eligible veterans or service members training under these programs; and to avoid payment to those who lose eligibility. The current automated systems, both at VA and DoD, have been developed over the last twenty-two years. The systems were specifically designed to utilize computer matching in transferring enrollment and eligibility data to facilitate accurate payments and avoid incorrect payments. The source agency, DMDC, stores eligibility data on its computer-based system of record. The cost of providing this data to VA electronically is minimal compared to the cost DMDC would incur if the data were forwarded to VA in hard copy. By comparing records electronically, VA avoids the personnel costs of inputting data manually as well as the storage costs of the DMDC documents. This results in an estimated annual savings to VA of $26,724,091 in mailing and data entry costs. DoD reported an estimated annual savings of $12,350,000. A cost-benefit analysis is at Attachment 1. In the 32 years since the inception of the Chapter 30 program, the cost savings of using computer matching to administer the benefit payments for these educational assistance programs have remained significant. The implementation of Chapter 33 has affected the Chapter 30 program over the past 8 years (fiscal year 2010 through fiscal year 2017). Statistics show a decrease of 23 percent in the number of persons who ultimately use Chapter 30 from fiscal year 2015 to 2016. The number of persons who use Chapter 33 has consistently been above 700,000 in the past four years. VA foresees continued cost savings due to the number of persons eligible for the education programs.

  • Evaluation Cycle: Annual Orientation A) At the start of each school year, the superintendent, principal or designee shall conduct a meeting for Educators and Evaluators focused substantially on educator evaluation. The superintendent, principal or designee shall:

  • Results The five values obtained shall be arranged in order and the median value taken as the result of the measurement. This value shall be expressed in Newtons per centimetre of width of the tape.

  • Target Audience The target audience for this policy includes, but is not limited to, all faculty, trainees/students, and other members of MD Anderson’s workforce, including Facilities Management (FM) Project Managers, FM Operations and Maintenance (O&M) Staff, Contractors, and Stakeholders who request a Scheduled Utility Outage for: • New construction. • Renovation. • Maintenance.

  • Value Engineering The Supplier may prepare, at its own cost, a value engineering proposal at any time during the performance of the contract. The value engineering proposal shall, at a minimum, include the following:
