EXPERIMENTAL PROCEDURE Sample Clauses
EXPERIMENTAL PROCEDURE. All study participants came to the same room (individually) and carried out the experiment on the same laptop, keeping factors such as lighting conditions and screen setup constant so that they did not influence the image analysis task. Before using each interface, each participant completed an online tutorial to learn how to use the tools, looking for change on a separate example image. Participants then used each of the interfaces in turn for 10 minutes, answering a simple yes/no question for each image pair: “do surface features change between the two images?” To mitigate bias caused by learning of the system, the order in which the interfaces were presented was counterbalanced so that an equal number of participants tested the interfaces in each order. The order in which image pairs were displayed to each participant was also randomised, to prevent bias caused by image content (e.g. images with or without change appearing in the same interface each time). After using each interface, participants completed the questionnaire described previously to share their views.
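The counterbalancing and randomisation described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' actual assignment code; the function names and the assumption of three interfaces (labelled "A", "B", "C" here) are hypothetical.

```python
import itertools
import random

def assign_orders(participant_ids, interfaces):
    """Counterbalance presentation order: cycle through every permutation of
    the interfaces so each order is used by an equal number of participants
    (exact balance requires the participant count to be a multiple of the
    number of orders)."""
    orders = list(itertools.permutations(interfaces))
    return {pid: orders[i % len(orders)] for i, pid in enumerate(participant_ids)}

def randomise_pairs(image_pairs, seed=None):
    """Independently shuffle the image-pair order for one participant."""
    rng = random.Random(seed)
    shuffled = list(image_pairs)
    rng.shuffle(shuffled)
    return shuffled

# Example: 12 participants and 3 interfaces give 6 possible orders,
# so each order is experienced by exactly 2 participants.
assignments = assign_orders(range(12), ["A", "B", "C"])
```

With three interfaces there are six possible orders, so full counterbalancing needs a participant count divisible by six; otherwise a subset of orders (e.g. a Latin square) is commonly used instead.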
EXPERIMENTAL PROCEDURE. The testing of the fluids is performed in a three-step process. The first step is a basic "boil down" test in which the fluids are heated at a low heat flux and the water is allowed to boil off completely. The consistency of the solution is observed during this boil-down test, and the final solids or liquid residue are measured. This test provides a worst-case scenario as to the level of deposition/fouling that could occur within the distiller. The second test is performed in a basic "bench-top" distiller that boils the water and condenses the distillate in a simple air-cooled condenser. In this test, the solution is boiled down to 10% of the original volume, which is equivalent to the concentration of the concentrate in the actual Ovation distiller. This test provides a means of observing the purity of the distillate and concentrate before testing in the Ovation distiller. If the residue were of a nature that would likely foul the heat exchangers in the Ovation distiller, no further testing would be performed. The third test is performed with the actual Ovation Alpha unit. This developmental unit has a terminal capacity of 9 to 12 gallons per hour, depending on the compressor drive configuration. For these tests, the compressor was operated at an 8-gallon-per-hour flow rate, or three-quarter capacity, to minimize potential operational issues that could occur while testing with these new fluids. It is anticipated that future Beta units, with a capacity of up to 20 gallons per hour, will perform in a similar fashion. Shown in Figure 2.1 is a flow diagram of the system, and in Figure 2.2 are photographs of the test unit. The sample water first passes through a 5-micron filter to remove large particulates. The liquid is then heated to approximately 95 °C in a counterflow heat exchanger. The heated water is again filtered in a 1-micron cartridge filter, after which it enters the distiller.
The one-micron filter is used to capture particulates that have precipitated out of the solution, or viscous fluids that formed while the solution was being heated. A quantity of the concentrate, approximately one gallon per hour, is continuously withdrawn from the unit so as not to over-concentrate the solution. Samples are taken at the inlet to the distiller, the concentrate, and the pure distillate outlet streams. In operation, the unit is brought up to operating temperature using clean water to define a baseline condition. Then approximately 5 gallons of the sample liquid a...
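The flow figures quoted above imply a simple steady-state mass balance: with roughly 8 gallons per hour of feed and about 1 gallon per hour of concentrate continuously withdrawn, approximately 7 gallons per hour leave as distillate, and non-volatile solutes concentrate by roughly the feed-to-concentrate flow ratio. A short sketch of that arithmetic (the feed TDS value is an assumed example for illustration, not a measurement from these tests):

```python
def mass_balance(feed_gph, concentrate_gph):
    """Steady-state water balance for a continuous distiller:
    whatever is not withdrawn as concentrate leaves as distillate."""
    distillate_gph = feed_gph - concentrate_gph
    # Non-volatile solutes all report to the concentrate stream, so they
    # concentrate by the feed-to-concentrate flow ratio.
    concentration_factor = feed_gph / concentrate_gph
    return distillate_gph, concentration_factor

# 8 gph feed with ~1 gph concentrate withdrawal (figures from the text).
distillate, factor = mass_balance(8.0, 1.0)

# An assumed feed of 500 mg/L total dissolved solids (hypothetical value)
# would reach about 500 * 8 = 4000 mg/L in the concentrate.
concentrate_tds = 500 * factor
```

This idealised balance neglects carry-over of solutes into the distillate, which the sampling of all three streams described above is designed to quantify.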
EXPERIMENTAL PROCEDURE. The study was built and conducted using the PsychoPy2 software (▇▇▇▇▇▇, 2009; 2007). The software allowed the experiment to call and display images from the library and record participant responses. The experiment was run on a desktop computer with participants responding using the keyboard. The study was designed to have participants in both strategies view stimuli drawn without replacement from the image library in a random order and respond accordingly. The experiment recorded which stimulus was displayed, and the participant response and response time for that stimulus. Table 3.1 displays the breakdown of the 30 participants into their respective conditions.

Table 3.1:
General Strategy: 15 participants
Specific Strategy: 15 participants (Cut condition: 3; Flat condition: 3; Dent condition: 3; Glue condition: 3; Scratch condition: 3)

After listening to a briefing and having their questions answered, participants signed their consent. Both strategies of the study then had participants assigned to a condition. As the participants were all university students, none of them could be considered experts in visual inspection. In order to familiarise them with the task, they completed a short practice session with instantaneous feedback. It was limited to only 30 trials to avoid any potential bias being introduced by the presence of rapid feedback (▇▇▇▇▇ & ▇▇▇▇▇▇▇, 1973). This form of training is also commonly found in citizen science projects and is an approach we also intend to take. The General strategy asked participants to indicate which of the categories the displayed stimulus belonged to, or whether it was a normal sample. In the Specific condition, participants were assigned a defect category and were asked to reject samples only if the stimulus presented contained a defect from their assigned category. Upon conclusion of the study the participants were asked to complete the NASA Task Load Index (NASA-TLX), a cognitive workload assessment tool (NASA, 1986).
This tool is used throughout the remainder of this report and is worth describing in some detail here. The NASA-TLX is generally regarded as an industry standard for the self-report of workload on a task and as such has been used in at least 4,000 published studies across myriad situations and industries (▇▇▇▇, 2006). It is a multi-dimensional subjective rating procedure that generates workload scores based upon six subscales:
• Mental demand (How much mental and perceptual activity was required?).
• Physical demand (How much ...
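In the standard NASA-TLX procedure, each of the six subscales is rated on a 0-100 scale and weighted by the number of times the participant chose it across 15 pairwise comparisons (the weights sum to 15); the overall workload score is the weighted average over those 15 comparisons. A minimal sketch of that scoring, with illustrative ratings and weights rather than data from this study:

```python
def nasa_tlx_weighted(ratings, weights):
    """Overall NASA-TLX workload: each 0-100 subscale rating is multiplied by
    how many times that subscale was chosen in the 15 pairwise comparisons,
    then the products are summed and divided by 15."""
    assert set(ratings) == set(weights), "same six subscales expected"
    assert sum(weights.values()) == 15, "pairwise-comparison weights sum to 15"
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

# Illustrative example (hypothetical participant, not study data):
ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
score = nasa_tlx_weighted(ratings, weights)  # (350+0+165+80+240+35)/15 = 58.0
```

Many studies instead report the unweighted "Raw TLX" (a simple mean of the six ratings); which variant was used should be stated when scores are reported.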
EXPERIMENTAL PROCEDURE. Participants tested three versions of the interface for ten minutes each, an hour in total, which included time for the introduction and for filling in a feedback survey after each one. Each participant completed ten minutes on one of the three (A) interfaces (see Table 3.2), in which the crowd had classified all (change and no change) imagery correctly but to differing levels of consensus. Likewise, they spent ten minutes on one of three (B) interfaces, in which half of the imagery containing changes, and half of the imagery containing no changes, was classified incorrectly to different levels of consensus. For the final ten minutes, participants carried out the task with an image set (C) in which the incorrectly and correctly classified imagery of the second image set was switched, so that any differences in results could be attributed to the algorithm’s accuracy rather than to the image content itself.
EXPERIMENTAL PROCEDURE. Each participant completed the experiment in the same room but at different times. On arrival, participants received an explanation of the project and their task, before they signed a paper copy of the information sheet and consent form, as approved by the Faculty of Engineering’s Ethics Board. The briefing emphasised that participants should mark changes on the surface and not changes in lighting or image quality, for example, that might occur due to differences in atmospheric or photographic conditions. When the participant indicated that they understood what was required and had no further questions, they completed an introductory questionnaire; this captured basic demographic data, which our experience and previous research suggested might support data analysis. Following a demonstration of the task, participants had the opportunity to complete the task for one image pair to familiarise themselves with the user interface. Figure 3.14 shows the task that confronted participants. Images were bordered in red if the algorithm suggested there was no change between the two images, and green if the algorithm suggested there was a change. Participants, however, were invited to judge the images independently and to mark where on the images they saw change using the rectangular drawing tool provided, coloured red and labelled “Area of change”. If they did not see any changes, they clicked on “Done” and moved to the next image pair. If they did see a change and drew a rectangle, however, a window would pop up and ask them “What surface feature have you marked?” Underneath the question were four choices, from which they could select only one with its radio button.
