Data Analysis Sample Clauses

Data Analysis. In the meeting, the analysis that has led the College President to conclude that a reduction-in-force in the FSA at that College may be necessary will be shared. The analysis will include but is not limited to the following:
● Relationship of the FSA to the mission, vision, values, and strategic plan of the College and district
● External requirement for the services provided by the FSA such as accreditation or intergovernmental agreements
● Annual instructional load (as applicable)
● Percentage of annual instructional load taught by Residential Faculty (as applicable)
● Fall 45th-day FTSE inclusive of dual enrollment
● Number of Residential Faculty teaching/working in the FSA
● Number of Residential Faculty whose primary FSA is the FSA being analyzed
● Revenue trends over five years for the FSA including but not limited to tuition and fees
● Expenditure trends over five years for the FSA including but not limited to personnel and capital
● Account balances for any fees accounts within the FSA
● Cost/benefit analysis of reducing all non-Residential Faculty plus one Residential Faculty within the FSA
● An explanation of the problem that reducing the number of faculty in the FSA would solve
● The list of potential Residential Faculty that are at risk of layoff as determined by the Vice Chancellor of Human Resources
● Other relevant information, as requested
Data Analysis. Alabama Power will ensure appropriate data analysis techniques are used in the collection of the data required for this study.
Data Analysis. The data gathered from the interviews was fully transcribed by hand and coded using axial and thematic coding (Merriam & Xxxxxxx, 2016; see Appendix E). Interviews were transcribed in Russian and Kazakh, and then coded and analyzed in English. Excerpts from the interviews were also translated into English and included as part of the findings. To ensure the accuracy of the translation, the excerpts were shown to an English teacher with full command of both Kazakh and Russian. She was asked to translate the English excerpts back into Russian and Kazakh, and these back-translations were then compared to the original excerpts from the transcripts. All individually identifiable information was removed from the excerpts and the transcripts so that the participants could not be identified. Minor changes were made in the English version of the excerpts to ensure the precision of the language. The data was first coded inductively. This inductive approach was broadly similar to grounded theory in trying to make sense of the data. For the successful application of the grounded theory approach, three main concepts were kept in mind: “constant comparison, theoretical sampling, and saturation” (Xxxx et al., 2014, p. 392). Data analysis was conducted the day after each interview or within the same week in order to cope with large amounts of information more effectively. Upon completing the transcription and preliminary data analysis, the categories that emerged were analyzed in a table where the answers of all ten participants were arranged in rows according to the themes. This table allowed constant comparison between categories, comparing the participants individually and looking for differences and similarities within and across the departments. Thus, two stages of analysis were carried out: “the within-case analysis and the cross-case analysis” (Xxxxxxx & Xxxxxxx, 2016, p. 234). In addition, upon identifying the first set of categories, new concepts and questions appeared and were asked in further interviews, which is an example of theoretical sampling (Xxxxxx & Xxxxxxx, 2008). Groups of concepts were generated from similar answers, which formed certain categories. The next step was sorting categories, where unnecessary categories were eliminated and new subcategories were added.
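As an illustration of the comparison table described above, a minimal sketch like the following could organize coded excerpts into a participant-by-theme matrix to support within-case and cross-case comparison. The file name and column names ("coded_excerpts.csv", "participant", "theme", "excerpt") are assumptions for the example; the original study did not describe any particular tooling for this step.

```python
# Illustrative sketch only: arranging coded interview excerpts into a
# participant-by-theme table for within-case and cross-case comparison.
# File and column names below are assumptions, not from the original study.
import pandas as pd

codes = pd.read_csv("coded_excerpts.csv")  # one row per coded excerpt

# Within-case view: all themes and excerpts for a single participant.
within_case = codes[codes["participant"] == "P01"].groupby("theme")["excerpt"].apply(list)
print(within_case)

# Cross-case view: how often each theme appears per participant, which makes
# similarities and differences within and across departments visible.
cross_case = pd.pivot_table(
    codes,
    index="participant",
    columns="theme",
    values="excerpt",
    aggfunc="count",
    fill_value=0,
)
print(cross_case)
```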
Data Analysis. 842. The Parties acknowledge that the Consultant for the ACLU Agreement is preparing a report, in consultation with an independent statistical expert, which assesses data regarding investigatory stops completed by CPD officers for the period between 2018 and 2020 (“Report”). With respect to the disparate impact compliance methodology for this Report, the City has agreed that the Consultant may (1) assume that a prima facie showing under ICRA based on disparate impact on the basis of race has been satisfied, and (2) forego that analysis. The Parties recognize that the methodology for this Report includes, but is not limited to, an analysis of the following:
Data Analysis. All error values were expressed as mean ± standard deviation (SD). Statistical analysis was performed using Xxxxxx’x homogeneity test to confirm that all sample group data were of acceptable distribution (p > 0.05) before statistical significance between the sample groups was assessed by one-way analysis of variance (ANOVA) with post-hoc Tukey analysis in Origin 2016. Statistically significant differences were assumed when p ≤ 0.05.
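A minimal Python sketch of the same workflow, assuming Levene's test stands in for the redacted homogeneity test and using made-up sample data; the original analysis was carried out in Origin 2016, so this is only an illustration of the sequence of steps.

```python
# Sketch of the workflow above: homogeneity check, one-way ANOVA, then
# post-hoc Tukey HSD. Data and the choice of Levene's test are assumptions.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 1.0, 12)   # example sample groups (synthetic)
group_b = rng.normal(11.5, 1.0, 12)
group_c = rng.normal(10.2, 1.0, 12)

# 1. Homogeneity of variances: proceed only if p > 0.05.
_, p_homog = stats.levene(group_a, group_b, group_c)

# 2. One-way ANOVA across the sample groups.
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

# 3. Post-hoc Tukey HSD to locate pairwise differences (alpha = 0.05).
values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * 12 + ["B"] * 12 + ["C"] * 12
tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)

print(f"Levene p = {p_homog:.3f}, ANOVA p = {p_anova:.3f}")
print(tukey.summary())
```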
Data Analysis.
Analysis of qualitative data. Analysing the data is important in order to “get the sense” of the information gathered (Xxxxxxxx, 2013). Since qualitative data is often very rich, especially if audio or visual information is analyzed, it is crucial to filter out the unnecessary parts of the interview and group the remaining data into relevant themes (Xxxxxxxx, 2013). After all interviews had been recorded, I collected the audio files into a separate folder and imported the folder into MAXQDA, a software package for qualitative and mixed methods research. Since analyzing and coding data by hand is a difficult task, many researchers, including me, rely on software solutions when analyzing qualitative data (Xxxxxxxx, 2013). First, I transcribed all the audio files using the built-in transcribing feature of MAXQDA. This helped me not only to have a text version of each interview, but also to have every paragraph timestamped, which allowed me to listen to specific parts of an interview by clicking on the sentences in the text. After all interviews had been transcribed, I started the coding, “the process of organizing the data by bracketing chunks and writing a word representing a category in the margins” (Xxxxxxx & Xxxxxx, 2012 as cited in Xxxxxxxx, 2013, p. 247). Since my interviews were semi-structured, the questions I asked revolved around certain topics. This allowed me to use a selective coding approach when coding the text segments. After I had coded all the interviews, MAXQDA allowed me to organize the coded segments into themes and topics, and to export the data into a printable format. Since all interviews were conducted in Kazakh and Russian, I coded the original interviews and translated only the coded segments.
Quantitative data. The quantitative data was collected using Google Forms. This tool allows researchers to create a dynamic spreadsheet in Google Sheets, and the data in the spreadsheet is updated automatically each time a new response is submitted. The problem with Google Sheets, however, is that the information is recorded “as it is”, which means that every respondent’s answer is recorded not as a value, but as the answer choice provided in the survey. This creates an extra obstacle if a researcher wants to start analyzing the data in a software package immediately. This problem, however, can be easily solved by using the built-in “find and replace” feature within Google Sheets. After all responses had been collected, I renamed the columns in t...
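As a rough illustration of the recoding step described above, the sketch below maps exported answer text to numeric values with pandas, which is equivalent to running "find and replace" over each question column in Google Sheets. The file name, column names, and answer choices are assumptions for the example, not details from the original study.

```python
# Illustrative sketch: Google Forms exports record the answer choice text,
# so each choice is mapped to a numeric value before analysis.
# File name, column names, and the Likert wording are assumptions.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

likert_map = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Equivalent to a "find and replace" pass over each question column.
likert_columns = ["Q1", "Q2", "Q3"]
responses[likert_columns] = responses[likert_columns].replace(likert_map)

responses.to_csv("survey_responses_numeric.csv", index=False)
```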
Data Analysis. Beginning with the 2013-2014 school year, and annually thereafter, the District will maintain data regarding the participation of students, by race and ELL status, in higher level learning opportunities. The District will additionally re-conduct the surveys described in 1.c) and 1.d) above to gather information regarding the efficacy of strategies it has implemented. The District will review the data to identify whether there remains a statistically significant disparity in the participation of underrepresented group students when compared to peers not in the underrepresented groups, in higher level learning opportunities. The District will also consider, on an annual basis, whether the strategies and plan it has implemented have proven effective, or need to be altered. If alterations are required, the District will enact such alterations within one year of identifying the need for that change.
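The clause does not name the statistical test the District uses; a two-proportion z-test is one common way such a disparity in participation rates could be checked. The sketch below uses statsmodels with placeholder counts purely as an illustration.

```python
# Hypothetical sketch of an annual disparity check: compare the participation
# rate of underrepresented students against peers. Counts are placeholders;
# the District's actual methodology is not specified in the clause.
from statsmodels.stats.proportion import proportions_ztest

enrolled = [45, 180]    # students in higher level learning opportunities: [underrepresented, peers]
eligible = [400, 900]   # total students in each group

z_stat, p_value = proportions_ztest(count=enrolled, nobs=eligible)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant disparity in participation rates.")
else:
    print("No statistically significant disparity detected.")
```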
Data Analysis. Canopy cover for the population is estimated at 10%, although densitometer readings were taken before the majority of the trees had begun to produce leaves. Soil was very shallow at the site, measuring approximately 5.0 cm in depth. The soil sample collected was not large enough for proper testing. The average number of plants was 8.5 per m². The average number of flowering plants was 5.5 per m². The average number of immature plants was
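As a minimal worked example, the averages reported above could be obtained from per-quadrat counts like the assumed values below, which are chosen only to reproduce the stated means of 8.5 total and 5.5 flowering plants per m²; the actual quadrat counts are not given in the clause.

```python
# Hypothetical quadrat counts (four 1 m² quadrats) used only to show how the
# reported per-square-meter averages would be computed.
quadrat_counts = {
    "total":     [9, 8, 10, 7],   # sums to 34 -> mean 8.5 per m²
    "flowering": [6, 5, 6, 5],    # sums to 22 -> mean 5.5 per m²
}

for category, counts in quadrat_counts.items():
    mean_density = sum(counts) / len(counts)
    print(f"Average {category} plants per m²: {mean_density:.1f}")
```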
Data Analysis. In any Clinical Trial where BSP uses the BSP Array with a Compound, Prometheus shall provide the raw data plus condensed or processed data ready for biostatistical correlation analysis. The biostatistical analyses (i.e. the correlation of array results with clinical outcome to identify responder signals/signatures) will be conducted at BSP and, at the request of BSP, Prometheus will use Commercially Reasonable Efforts to co-operate with and support BSP’s analysis. BSP acknowledges that, in the event of any failure by BSP to provide any information or data (including data derived from any Clinical Trial) in its possession and control which is necessary for Prometheus to obtain Regulatory Approval of the BSP Array or any Assay associated therewith or to otherwise Commercialize the same, Prometheus shall be excused from its Commercialization obligations hereunder with respect to such BSP Array or Assay associated therewith and shall not be required to grant any licenses pursuant to Sections 3.2, 3.3 and 3.4.
Data Analysis. Microsoft Excel and SPSS-11 were used to perform the statistical analysis and to assess numeric trends. Intraclass Correlation (ICC) was used to measure the level of agreement among physicians and nurses. There are two approaches to ICC: consistency and absolute agreement. The difference between consistency and absolute agreement lies in how the systematic variability due to raters or measures is treated. If that variability is considered irrelevant, it is not included in the denominator of the estimated ICCs, and measures of consistency are produced. If systematic differences among levels of ratings are considered relevant, rater variability contributes to the denominators of the ICC estimates, and measures of absolute agreement are produced. In the current study, we used the consistency approach because it is more comparable to the Kappa statistic used in our later analysis. The K statistic was employed to measure the level of agreement among the physicians themselves and among the nurses themselves (quadratic weighting). The K statistic is based on a formula developed by Fleiss [13], which provides a numerical measure of agreement among multiple raters. Xxxxx’x Kappa coefficient was used to test the level of agreement between the two nurses in each unit; Xxxxx’x Kappa is more suitable than the Fleiss [13] K statistic for examining inter-observer agreement between two raters. The Kappa statistic measures the observed amount of agreement adjusted for the amount of agreement expected by chance alone. A value of −1.00 indicates complete disagreement, a value of 0 indicates that the agreement is no better than chance, and a value of +1.00 indicates a perfect agreement. In addition, Chi-square analysis was performed in order to examine the differences between the two units in the staff members’ ratings.
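A minimal Python sketch of two of the agreement measures described above, using made-up ratings; the original analysis was run in Excel and SPSS-11, so this is only an illustration of the calculations, not the study's actual procedure.

```python
# Illustrative sketch: Fleiss' kappa across multiple raters and a
# quadratically weighted kappa between two raters. All rating data is assumed.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Ratings on a 1-5 scale: rows are cases, columns are physician raters.
physician_ratings = np.array([
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 1],
])

# Fleiss' kappa among the physicians (multiple raters).
table, _ = aggregate_raters(physician_ratings)
print(f"Fleiss' kappa (physicians): {fleiss_kappa(table):.3f}")

# Quadratically weighted kappa between the two nurses in one unit.
nurse_1 = [4, 2, 5, 3, 1]
nurse_2 = [4, 3, 5, 3, 2]
print(f"Weighted kappa (nurses): {cohen_kappa_score(nurse_1, nurse_2, weights='quadratic'):.3f}")
```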