Data Analysis Sample Clauses

Data Analysis. In the meeting, the analysis that has led the College President to conclude that a reduction-in-force in the FSA at that College may be necessary will be shared. The analysis will include but is not limited to the following:
● Relationship of the FSA to the mission, vision, values, and strategic plan of the College and district
● External requirements for the services provided by the FSA, such as accreditation or intergovernmental agreements
● Annual instructional load (as applicable)
● Percentage of annual instructional load taught by Residential Faculty (as applicable)
● Fall 45th-day FTSE, inclusive of dual enrollment
● Number of Residential Faculty teaching/working in the FSA
● Number of Residential Faculty whose primary FSA is the FSA being analyzed
● Revenue trends over five years for the FSA, including but not limited to tuition and fees
● Expenditure trends over five years for the FSA, including but not limited to personnel and capital
● Account balances for any fees accounts within the FSA
● Cost/benefit analysis of reducing all non-Residential Faculty plus one Residential Faculty within the FSA
● An explanation of the problem that reducing the number of faculty in the FSA would solve
● The list of potential Residential Faculty at risk of layoff, as determined by the Vice Chancellor of Human Resources
● Other relevant information, as requested
Data Analysis. Alabama Power will ensure appropriate data analysis techniques are used in the collection of the data required for this study.
Data Analysis. As noted in the Introduction, WP6 is designed on the basis of a two-stage analysis process. In this document, only the first stage - single case analysis - is described. This process is depicted in Steps 1-3 of Figure 2 (below). Data analysis in WP6 is premised on a ‘multi-grounded theory’ (Xxxxxxxx and Xxxxxxxx, 2010) approach. This works on the principle not that new theory is induced from data analysis, but that theory is essential to interpretation and knowledge production and can result in the revision or refinement of theory. How this works in practice is outlined in the PROMISE Data Handbook, but it essentially employs standard inductive coding followed by a process of ‘theoretical matching’ and validation against both data and existing theoretical frameworks at the interpretative level.

Coding was conducted by all teams using NVivo 11 computer-assisted qualitative data analysis software (CAQDAS). Textual materials such as (original-language) transcripts of recorded and online interviews, field diaries, social media communication and notes of informal conversations, as well as relevant sound and image files, were uploaded as ‘sources’ into the relevant NVivo 11 project. As depicted in Figure 2, the first step of coding consists of the coding of qualitative data sources (e.g. semi-structured interviews, field diaries, focus groups, images) in native language by partners as separate, individual projects. Ethnographic case data were coded, in the first instance, to a maximum of two hierarchical levels.

After discussion with the Consortium members participating in WP6, it was agreed to employ a ‘skeleton coding tree’ for Level 2 nodes (see Figure 2). This meant that a list of Level 2 (parent) codes (in English) was agreed by partners prior to the commencement of coding. These were imported into each NVivo database and used, where appropriate, as ‘parent nodes’ under which inductively generated Level 1 nodes (in native language) were grouped. Where Level 1 nodes did not fit within pre-determined Level 2 nodes - for example, because an activity or experience was specific to the case - new Level 2 nodes could be created for that case. The skeleton coding tree was circulated for discussion among partners and amended following a pilot coding of excerpts of a shared interview. In practice, the coding tree worked well, with new Level 2 nodes being introduced rarely. The skeleton coding tree is attached as Appendix 4. Extensive guidelines on coding, desi...
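For readers less familiar with hierarchical coding, the skeleton-tree arrangement described above can be sketched outside NVivo. The following Python sketch is purely illustrative: the parent codes, child codes, and helper function are invented examples, not the actual PROMISE coding tree or any NVivo API.

```python
# Illustrative sketch of a two-level "skeleton coding tree" (hypothetical;
# not the actual PROMISE tree or an NVivo data structure).

# Level 2 (parent) codes agreed in English before coding begins.
# These example labels are invented for illustration.
SKELETON_LEVEL2 = ["Stigmatisation", "Conflict", "Responses"]

# The tree maps each Level 2 parent node to its inductively generated
# Level 1 child nodes (coded in the native language of each case).
coding_tree: dict[str, list[str]] = {parent: [] for parent in SKELETON_LEVEL2}

def file_level1_code(level1_code: str, level2_parent: str) -> None:
    """Group a Level 1 node under a Level 2 parent, creating a new
    case-specific Level 2 node when the code fits no existing parent."""
    coding_tree.setdefault(level2_parent, []).append(level1_code)

# Usage: one native-language code filed under an agreed parent, and one
# under a new, case-specific Level 2 node (both invented examples).
file_level1_code("respekt i lokalsamfunnet", "Responses")
file_level1_code("graffiti som uttrykk", "Street art")  # new Level 2 node
```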
Data Analysis. 842. The Parties acknowledge that the Consultant for the ACLU Agreement is preparing a report, in consultation with an independent statistical expert, which assesses data regarding investigatory stops completed by CPD officers for the period between 2018 and 2020 (“Report”). With respect to the disparate impact compliance methodology for this Report, the City has agreed that the Consultant may (1) assume that a prima facie showing under ICRA based on disparate impact on the basis of race has been satisfied, and (2) forego that analysis. The Parties recognize that the methodology for this Report includes, but is not limited to, an analysis of the following:
Data Analysis. All error values were expressed as their mean ± standard deviation (SD). Xxxxxx’x homogeneity test was used to confirm that all sample group data were of acceptable distribution (p > 0.05) before statistical significance between the sample groups was assessed by one-way analysis of variance (ANOVA) with post-hoc Tukey analysis in Origin 2016. Statistically significant differences were assumed when p ≤ 0.05.
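As a rough illustration of this workflow, the following Python sketch runs the same sequence on invented data. Origin 2016 is a GUI package, so the steps are reproduced in code instead, and Levene's test stands in for the redacted homogeneity test, which is an assumption.

```python
# Minimal sketch of the described workflow on invented data; Levene's test
# is a stand-in for the redacted homogeneity test (assumption).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {name: rng.normal(loc, 1.0, size=10)
          for name, loc in [("A", 5.0), ("B", 5.2), ("C", 6.1)]}

# Error values expressed as mean ± standard deviation (SD).
for name, vals in groups.items():
    print(f"{name}: {vals.mean():.2f} ± {vals.std(ddof=1):.2f}")

# Homogeneity check: proceed when p > 0.05 (acceptable distribution).
_, p_homog = stats.levene(*groups.values())
print(f"homogeneity p = {p_homog:.3f}")

# One-way ANOVA, then post-hoc Tukey HSD; significance assumed at p <= 0.05.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
all_vals = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(all_vals, labels, alpha=0.05))
```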
Data Analysis.

Analysis of qualitative data

Analysing the data is important in order to “get the sense” of the information gathered (Xxxxxxxx, 2013). Since qualitative data is often too rich, especially if audio or visual information is analyzed, it is crucial to filter out the unnecessary parts of the interview and group the remaining data into relevant themes (Xxxxxxxx, 2013). After all interviews had been recorded, I collected the audio files into a separate folder and imported the folder into MAXQDA, a software package for qualitative and mixed methods research. Since analyzing and coding data by hand is a difficult task, many researchers, including me, rely on software solutions when analyzing qualitative data (Xxxxxxxx, 2013). First, I transcribed all the audio files using the built-in transcription feature of MAXQDA. This helped me not only to have a text version of each interview, but also to have every paragraph timestamped, which allowed me to listen to specific parts of an interview by clicking on the sentences in the text. After all interviews had been transcribed, I started the coding, “the process of organizing the data by bracketing chunks and writing a word representing a category in the margins” (Xxxxxxx & Xxxxxx, 2012, as cited in Xxxxxxxx, 2013, p. 247). Since my interviews were semi-structured, the questions I asked revolved around certain topics. This allowed me to use a selective coding approach when coding the text segments. After I had coded all the interviews, MAXQDA allowed me to organize the coded segments into themes and topics, and to export the data into a printable format. Since all interviews were conducted in Kazakh and Russian, I coded the original interviews and translated only the coded segments.

Quantitative data

The quantitative data was collected using Google Forms. This tool allows researchers to create a dynamic spreadsheet in Google Sheets, and the data in the spreadsheet automatically updates each time a new response is submitted. The problem with Google Sheets, however, is that the information is recorded “as it is”, which means that every respondent’s answer is recorded not as a value, but as the answer choice provided in the survey. This creates an extra obstacle if a researcher wants to start analyzing the data in a software package immediately. This problem, however, can be easily solved by using the built-in “find and replace” feature within Google Sheets. After all responses had been collected, I renamed the columns in t...
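The recoding step described above (replacing answer-choice text with numeric values) can also be sketched in Python; the column name, answer choices, and Likert mapping below are hypothetical, standing in for the survey's actual wording.

```python
# Minimal sketch of recoding Google Forms answer text to numeric values,
# mirroring the "find and replace" step described above. The column name
# and answer choices are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "How satisfied are you?": ["Very satisfied", "Neutral", "Dissatisfied",
                               "Very satisfied"],
})

# Map each answer choice recorded "as it is" to a numeric value.
likert = {"Very dissatisfied": 1, "Dissatisfied": 2, "Neutral": 3,
          "Satisfied": 4, "Very satisfied": 5}
responses["satisfaction_score"] = responses["How satisfied are you?"].map(likert)
print(responses)
```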
Data Analysis. Beginning with the 2013-2014 school year, and annually thereafter, the District will maintain data regarding the participation of students, by race and ELL status, in higher-level learning opportunities. The District will additionally re-conduct the surveys described in 1.c) and 1.d) above to gather information regarding the efficacy of the strategies it has implemented. The District will review the data to identify whether a statistically significant disparity remains in the participation in higher-level learning opportunities of students in underrepresented groups compared to peers not in those groups. The District will also consider, on an annual basis, whether the strategies and plan it has implemented have proven effective or need to be altered. If alterations are required, the District will enact them within one year of identifying the need for the change.
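The clause does not prescribe a statistical method; as one hedged illustration, a two-proportion z-test is a common way to check for a statistically significant participation disparity between two groups. The counts below are invented.

```python
# One common way to test for a participation disparity (illustrative only;
# the clause specifies no method, and the enrollment counts are invented).
from statsmodels.stats.proportion import proportions_ztest

enrolled = [38, 120]   # in higher-level courses: [underrepresented, peers]
eligible = [400, 800]  # total eligible students per group
z_stat, p_value = proportions_ztest(count=enrolled, nobs=eligible)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # disparity if p <= 0.05
```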
Data Analysis. Canopy cover for the population is estimated at 10%, although densitometer readings were taken before the majority of the trees had begun to produce leaves. Soil was very shallow at the site, measuring approximately 5.0 cm in depth. The soil sample collected was not large enough for proper testing. The average number of plants was 8.5 per m². The average number of flowering plants was 5.5 per m². The average number of immature plants was
Data Analysis. In any Clinical Trial where BSP uses the BSP Array with a Compound, Prometheus shall provide the raw data plus condensed or processed data ready for biostatistical correlation analysis. The biostatistical analyses (i.e. the correlation of array results with clinical outcome to identify responder signals/signatures) will be conducted at BSP and, at the request of BSP, Prometheus will use Commercially Reasonable Efforts to co-operate with and support BSP’s analysis. BSP acknowledges that if BSP fails to provide any information or data (including data derived from any Clinical Trial) in its possession and control which is necessary for Prometheus to obtain Regulatory Approval of the BSP Array or any Assay associated therewith or to otherwise Commercialize the same, Prometheus shall be excused from its Commercialization obligations hereunder with respect to such BSP Array or Assay associated therewith and shall not be required to grant any licenses pursuant to Sections 3.2, 3.3 and 3.4.
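As a loose illustration of the kind of biostatistical correlation analysis contemplated here (correlating array results with clinical outcome), the following Python sketch uses a point-biserial correlation on invented data; nothing in it reflects the actual BSP or Prometheus analysis plan.

```python
# Illustrative only: correlating a hypothetical array marker signal with
# binary responder status. All data and variable names are invented.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)
responder = rng.integers(0, 2, size=40)                 # 1 = clinical responder
marker_signal = 2.0 * responder + rng.normal(size=40)   # array readout

r, p = pointbiserialr(responder, marker_signal)
print(f"point-biserial r = {r:.2f}, p = {p:.4f}")  # candidate responder signal
```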
Data Analysis. Microsoft Excel and SPSS-11 were used to perform the statistical analysis and to assess numeric trends. Intraclass Correlation (ICC) was used to measure the level of agreement among physicians and nurses. There are two approaches to ICC: consistency and absolute agreement. The difference between them lies in how the systematic variability due to raters or measures is treated. If that variability is considered irrelevant, it is not included in the denominator of the estimated ICCs, and measures of consistency are produced. If systematic differences among levels of ratings are considered relevant, rater variability contributes to the denominators of the ICC estimates, and measures of absolute agreement are produced. In the current study, we used the consistency approach because it is more comparable to the Kappa statistic used in our later analysis. The K statistic was employed to measure the level of agreement among the physicians themselves and among the nurses themselves (quadratic weighting). The K statistic is based on a formula developed by Fleiss [13], which provides a numerical measure of agreement among multiple raters. Xxxxx’x Kappa coefficient was used to test levels of agreement between the two nurses in each unit, since Xxxxx’x Kappa is more suitable than the Fleiss [13] K statistic for examining inter-observer agreement between two raters. The Kappa statistic measures the observed amount of agreement adjusted for the amount of agreement expected by chance alone. A value of −1.00 indicates complete disagreement, a value of 0 indicates that the agreement is no better than chance, and a value of +1.00 indicates perfect agreement. In addition, Chi-square analysis was performed in order to examine the differences between the two units in the staff members’ ratings.
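As a small illustration of the two-rater agreement measure described above, the sketch below computes a quadratically weighted kappa, assuming the redacted coefficient is Cohen's kappa (an assumption; the source redacts the name). The ordinal ratings are invented.

```python
# Hedged sketch: quadratically weighted kappa for two raters, assuming the
# redacted coefficient is Cohen's kappa (assumption). Ratings are invented
# 1-5 ordinal scores from two nurses in the same unit.
from sklearn.metrics import cohen_kappa_score

nurse_a = [3, 4, 2, 5, 4, 3, 1, 4]
nurse_b = [3, 4, 3, 5, 3, 3, 2, 4]
kappa = cohen_kappa_score(nurse_a, nurse_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")  # -1 disagreement, 0 chance, +1 perfect
```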