Experimental Setup Sample Clauses
Experimental Setup. All runtime experiments were conducted on a standard desktop computer with an Intel(R) Core(TM) i7-4790K CPU running at 4.00 GHz (4 cores; 8 hardware threads), 32 GB RAM, and two Nvidia GeForce Titan Z GPUs (each consisting of two devices with 2880 shader units and 6 GB main memory) using single precision. The operating system was Ubuntu 14.04.3 LTS (64 bit) with kernel 3.13.0-52, CUDA 7.0.65 (graphics driver 340.76), and OpenCL 1.2. All algorithms were implemented in C and OpenCL, and SWIG was used to obtain appropriate Python interfaces.5 The code was compiled using gcc-4.8.4 at optimization level -O3. For the experimental evaluation, we report runtimes for both the construction and the query phase (referred to as the "train" and "test" phases), with the focus on the latter, which makes use of the GPUs. We consider the following three implementations:
(1) bufferkdtree(i): The adapted buffer k-d tree implementation with both FindLeafBatch and ProcessAllBuffers being conducted on i GPUs.
(2) kdtree(i): A multi-core implementation of a k-d tree-based search, which runs i threads in parallel on the CPU (each handling a single query).
(3) brute(i): A brute-force implementation that makes use of i GPUs to process the queries in a massively-parallel manner.

The parameters for the buffer k-d tree implementation were fixed to appropriate values.6 Note that both competitors of bufferkdtree have been evaluated extensively in the literature; the reported runtimes and speed-ups can thus be put in a broad context. For simplicity, we fix the number k of nearest neighbors to k = 10 for all experiments. We focus on several data-intensive tasks from the field of astronomy. Note that a similar runtime behavior can be observed on data sets from other domains as well, as long as the dimensionality of the search space is moderate (e.g., from d = 5 to d = 30). We follow our previous work and consider the psf mag, psf model mag, and all mag data sets of dimensionality d = 5, d = 10, and d = 15, respectively; for a description, we refer to Xxxxxxx et al. [8]. In addition, we consider a new dataset derived from the Catalina Realtime Transient Survey.

5 The code is publicly available under xxxxx://xxxxxx.xxx/xxxxxxx/bufferkdtree.
6 For a tree of height h, we fixed B = 2^(24-h) and the number M of indices fetched from input and reinsert in each iteration of Algorithm 1 to M = 10 · B.
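To make the train/test timing protocol concrete, the following sketch times both phases for a k = 10 nearest-neighbor search using scikit-learn's CPU k-d tree, i.e., an analogue of kdtree(i). It is a minimal illustration with synthetic placeholder data, not the authors' C/OpenCL implementation; the data dimensions merely mimic one of the astronomy data sets.

```python
import time
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder data standing in for an astronomy catalogue
# (e.g., d = 10 as in the psf model mag data set).
rng = np.random.RandomState(0)
X_train = rng.rand(100_000, 10).astype(np.float32)
X_test = rng.rand(10_000, 10).astype(np.float32)

# "Train" phase: construct the k-d tree.
t0 = time.time()
nn = NearestNeighbors(n_neighbors=10, algorithm="kd_tree")
nn.fit(X_train)
t_train = time.time() - t0

# "Test" phase: answer all queries (k = 10 as in the text).
t0 = time.time()
dists, inds = nn.kneighbors(X_test)
t_test = time.time() - t0

print(f"train: {t_train:.2f}s, test: {t_test:.2f}s")
```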
Experimental Setup. In order to evaluate the performance of Jam-X, we carry out experiments in two small-scale indoor testbeds deployed in office environments with USB-powered xxxxx. In the first testbed, we use JamLab, a tool for controlled interference generation [5], to evaluate the impact of interference in a realistic and repeatable fashion. In JamLab, interference is either replayed from trace files that contain RSSI values recorded under interference, or generated from models of specific devices [5]. In particular, we use JamLab to emulate the interference patterns produced by microwave ovens, by Bluetooth, and by Wi-Fi devices; in the latter case, the interference emulates a continuous file transfer. To avoid additional interference as much as possible, we carry out the experiments in this testbed during the night, when Wi-Fi activity in the office building is lowest. In the second testbed, we do not use JamLab, but we deliberately choose an 802.15.4 channel affected by interference, namely channel 18. On channel 18 there is Wi-Fi traffic and sometimes also interference from microwave ovens in a nearby kitchen. For the experiments, we use two xxxxx, S and R. Node S sends handshake messages using transmission powers of -25 dBm and 0 dBm, and R replies to each message using the transmission power contained in the packet, i.e., the same one used by S. By using different transmission powers, we create different types of links for each handshake. Each packet is sent after a random interval in the order of tens of milliseconds, and nodes remain on the same channel for the whole duration of the experiment. Each experiment consists of several hundred thousand handshakes.
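As an illustration of the handshake structure described above, here is a minimal simulation sketch. The node names and the two power levels mirror the description, while the packet fields, the timing range, and the loss-free channel are hypothetical simplifications.

```python
import random
import time

TX_POWERS_DBM = [-25, 0]  # the two power levels used by node S

def handshake(power_dbm):
    """One S -> R exchange: S embeds its TX power in the packet
    and R echoes the reply at that same power."""
    packet = {"src": "S", "tx_power_dbm": power_dbm}
    reply = {"src": "R", "tx_power_dbm": packet["tx_power_dbm"]}
    return reply

for _ in range(5):  # a real experiment runs several hundred thousand of these
    power = random.choice(TX_POWERS_DBM)
    reply = handshake(power)
    assert reply["tx_power_dbm"] == power
    # random inter-packet interval in the order of tens of milliseconds
    time.sleep(random.uniform(0.01, 0.05))
```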
Experimental Setup. The purpose of the experimental study is to find out how the different evaluation techniques for finding robust optima compare when used within the same algorithmic basis, namely the (5/2DI, 35)-σSA-ES and the CMA-ES. The general experimental settings, shown in Table 8.1, are restricted to one particular search space dimension, n = 10, and an evaluation budget of 10,000 function evaluations, which is taken as the standard setup throughout this chapter. For the assessment of the quality of each scheme, we record the final solution quality over multiple runs. Here, the final solution quality refers to a highly accurate Xxxxx-Xxxxx approximation (using m = 1000 samples) of the expected objective function value of the solution returned after each optimization run.

Table 8.1 (the general experimental setup): search space dimension n = 10; budget of 10,000 function evaluations; final quality estimated by Xxxxx-Xxxxx integration using 1000 samples; (mean, std, median) reported over all runs; rank sum used for ranking the algorithmic schemes. Table 8.2 lists the test problems used for the empirical comparison.
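A minimal sketch of the quality measure described above, assuming additive input perturbations on a generic objective f; the objective, the noise model, and its scale are placeholders, while m = 1000 matches the text.

```python
import numpy as np

def f(x):
    # Placeholder objective; the chapter's test problems would go here.
    return float(np.sum(x ** 2))

def expected_quality(x, m=1000, noise_scale=0.1, seed=0):
    """Monte Carlo estimate of E[f(x + delta)] over input
    perturbations delta, used to score a returned solution."""
    rng = np.random.default_rng(seed)
    deltas = rng.uniform(-noise_scale, noise_scale, size=(m, x.size))
    return float(np.mean([f(x + d) for d in deltas]))

x_final = np.zeros(10)  # n = 10 as in the setup
print(expected_quality(x_final))
```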
Experimental Setup. To validate the ThoR concept, a wireless transmission experiment has been realized in a laboratory environment. Fig. 3 shows the setup used in this transmission and the spectrum of the transmitted radio frequency (RF) signals. All the components are described in the following sections. The LO signal, generated at 8.33 GHz, can be provided by a stable frequency synthesizer or by an optical frequency comb (photonic LO). The setup can be coherent, as in Fig. 3, or incoherent, using two different LO sources for the transmitter and receiver sides.
Experimental Setup. To evaluate the proposed Fuzzy Inference method for updating the candidate points, we tested the 3D-ASM on cardiac CT data from 9 patients, comparing both the simple convolution-based edge detection and the newly implemented FI-based method. For this, a statistical shape model was generated using expert-drawn contours of a group of 53 patients and normals, from 3D MR data [60]. The shape parameterization presented in Section 3.2.1 was applied, where each sample was divided into 16 slices, each containing 32 points for the epicardial contour and 32 points for the endocardial contour. To reduce model dimensionality, the model was restricted to represent 99% of the shape variation present in the training data, resulting in 33 modes of variation. The 3D ASM was applied to 9 short-axis CT acquisitions of cardiac LVs. Scans were acquired with CT scanners from two different vendors, and had an axial slice thickness of approximately 1 mm and an in-plane resolution of 0.5 mm/pixel. All data sets were reformatted to yield short-axis image slices. Prior to matching, the model pose was initialized manually. The initial model scale was set equal to the average model scale of the training set. The model shape was initialized to the mean training shape, whereas position was manually initialized inside the cardiac LV. The class centers of the three tissue classes used by FCM were initialized identically for each iteration and for each patient. During model matching, deformation was limited by constraining each component of the model deformation parameter vector between -3σ and +3σ. The model search ran for a fixed number of iterations, the same for both the FI-based model and the convolution-based model. For the FI-based model, a two-stage matching was employed: initially, the convolution method was used until the update step size between iterations substantially decreased. From there, the final adjustments, i.e., small scale and pose changes and deformation of the model, were realized using the FI-based point generation. The model states from the last iteration for both models were used for comparing the two candidate point generation methods. The method was visually evaluated to assess whether the new candidate point generation method is an improvement with respect to the convolution-based technique, by comparing results from the same iteration in the matching process. In case of matching failure, the match was reported as failure and excluded from further quantitative evaluation.
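The ±3σ deformation constraint can be sketched as a per-mode clamp of the PCA shape coefficients. This is an illustrative sketch, assuming the model stores a standard deviation per mode of variation; the variable names and the eigenvalue spread are hypothetical.

```python
import numpy as np

def constrain_shape(b, sigma):
    """Clamp each shape coefficient b[i] of the ASM to
    [-3*sigma[i], +3*sigma[i]], the limit used during matching."""
    return np.clip(b, -3.0 * sigma, 3.0 * sigma)

# 33 modes of variation, as retained by the model (99% of variance);
# the eigenvalue spread below is made up for illustration.
sigma = np.sqrt(np.linspace(2.0, 0.01, 33))
b = np.random.default_rng(1).normal(0.0, 2.0, 33)
b_constrained = constrain_shape(b, sigma)
```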
Experimental Setup. In this section we describe the setup for an experimental evaluation of our prototype based on a testbed cloud using the RUBiS Web application and a synthetic workload generator.
Experimental Setup. The ORION® open path analyser was deployed on the roof of an electrical substation within the facility, approximately six metres above ground. After installation and initial tests, it was noted that the turret which allows the multi-path distribution in the azimuth plane could not reliably rotate to the left of its home position. Nevertheless, to ensure reliable and consistent operation, the retroreflectors were deployed in an arc to the right of the ORION®'s forward position. This led to a reduction in the spatial coverage of the monitoring but did not compromise the trial. The issue has been investigated and traced back to a design problem with a supplied part; this has now been fully resolved. Five retroreflectors (instead of the 9 originally planned) were deployed in a 90° arc over a 90 × 80 m area to the north-west and north-east of the instrument. The five retroreflectors were therefore designated 5-9, as shown in Figure 8, at the following positions (UTM coordinates):

Retroreflector   Easting (m)   Northing (m)   Zone
5                482959.29     5030962.75     32 T
6                482946.67     5030983.82     32 T
7                482936.67     5031010.87     32 T
8                482976.23     5031002.96     32 T
9                483012.50     5031011.77     32 T
Experimental Setup. A complete description of the instrumentation used for these experiments is provided in reference 29, and a summary of details specific to BeS is provided here. Beryllium sulfide anions were formed via pulsed laser ablation30 of a Be rod in the presence of He (25 psia) seeded with CS2 (room temperature vapor pressure). Ablation was accomplished using the second harmonic of a Nd:YAG laser (532 nm), operating with a pulse energy of ~8 mJ. The ablation products were supersonically expanded into a differentially pumped vacuum chamber that housed a Wiley-McLaren time-of-flight mass spectrometer (WM-TOFMS).31 The axis of the mass spectrometer was perpendicular to the direction of the supersonic expansion. Within the mass spectrometer the anions were accelerated into a drift region where they were directed by an Einzel lens and four sets of deflector plates. A fifth set of deflector plates could be used as a mass gate for selection of the anions of interest. The anions were directed through the center of a velocity map imaging (VMI) lens. This three-electrode component was modeled after the design of Eppink and Xxxxxx.32 Photodetachment of BeS- was induced by the focused beam from a tunable dye laser (both Nd:YAG- and excimer-pumped dye lasers were used in these measurements). The laser beam was propagated along an axis perpendicular to the direction of the anion beam. The photon energies were chosen to be above the detachment threshold of BeS-, with an energy of 0.5-1 mJ per pulse and a beam diameter of <2 mm. The photodetachment lasers were frequency calibrated using the B-X absorption spectrum of room temperature I2 vapor, with line positions provided by the PGOPHER software package.33 Following photodetachment, the VMI optics focused the electrons onto a set of imaging-quality microchannel plates (MCPs) paired with a phosphor screen. A CCD camera recorded the images, which were averaged over several hundred thousand laser pulses using the imaging collection software designed by Li et al.34 The images were transformed using the MEVELER program.35 The MCPs were pulsed so that only the detached photoelectrons were detected. Mu-metal shielding surrounding the photodetachment and electron drift regions minimized image distortions due to external electric and magnetic fields. A photomultiplier tube (PMT) was positioned off-axis of the phosphor screen to monitor phosphor screen emission. This detection method allowed for the optimization of the anion and...
Experimental Setup. In order to generate statistically significant results for the game of Hex (board size 11 × 11) in a reasonable amount of time, both players use playouts of 1 second for choosing a move. To calculate the playing strength of the first player, we perform matches of two players against each other. Each match consists of 200 games, 100 with White and 100 with Black for each player. A statistical method based on [Hei01] and similar to [MKK14] is used to calculate 95%-level confidence lower and upper bounds on the real winning rate of a player, indicated by error bars in the graphs. The parameter Cp is set to 1 in all our experiments. To calculate the playout speedup for the first player when considering the second move of the game, the average number of PPS over 200 games is measured. Taking the average removes (1) the randomized behavior of MCTS in game playing and (2) the so-called warm-up phase on the Xeon Phi [RJM+15]. The results were measured on a dual-socket Intel machine with 2 Intel Xeon E5-2596v2 CPUs running at 2.40 GHz. Each CPU has 12 cores, 24 hyperthreads, and 30 MB L3 cache. Each physical core has 256 KB L2 cache. The peak TurboBoost frequency is 3.2 GHz. The machine has 192 GB physical memory. Intel's icc 14.1 compiler is used to compile the program. The machine is equipped with an Intel Xeon Phi 7120P at 1.238 GHz, which has 61 cores and 244 hardware threads. Each core has 512 KB L2 cache. The co-processor has 16 GB GDDR5 memory on board with an aggregate theoretical bandwidth of 352 GB/s. The peak turbo frequency is 1.33 GHz. The theoretical peak performance of the 7120P is 2.416 TFLOPS for single-precision (or TIPS for integer) operations and 1.208 TFLOPS for double-precision floating-point operations [Int13]. Intel's icc 14.1 compiler is used to compile the program in native mode; a native application runs directly on the Xeon Phi and its embedded Linux operating system.
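A minimal sketch of computing 95% confidence bounds on the winning rate from a 200-game match. The Wilson score interval used here is one common choice; the text's exact method follows [Hei01] and may differ, so this is an assumption for illustration.

```python
import math

def wilson_interval(wins, games, z=1.96):
    """95% confidence bounds on the true winning rate after
    `games` independent games with `wins` wins."""
    p = wins / games
    denom = 1 + z * z / games
    center = (p + z * z / (2 * games)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / games + z * z / (4 * games * games))
    return center - half, center + half

# e.g., 120 wins out of a 200-game match (100 as White, 100 as Black)
lo, hi = wilson_interval(120, 200)
print(f"win rate 0.60, 95% CI [{lo:.3f}, {hi:.3f}]")
```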