Experimental Setup Sample Clauses

Experimental Setup. All runtime experiments were conducted on a standard desktop computer with an Intel(R) Core(TM) i7-4790K CPU running at 4.00 GHz (4 cores; 8 hardware threads), 32 GB RAM, and two Nvidia GeForce Titan Z GPUs (each consisting of two devices with 2880 shader units and 6 GB main memory), using single precision. The operating system was Ubuntu 14.04.3 LTS (64 bit) with kernel 3.13.0-52, CUDA 7.0.65 (graphics driver 340.76), and OpenCL 1.2. All algorithms were implemented in C and OpenCL, and Swig was used to obtain appropriate Python interfaces.5 The code was compiled using gcc-4.8.4 at optimization level -O3. For the experimental evaluation, we report runtimes for both the construction and the query phase (referred to as the “train” and “test” phases), where the focus is on the latter (which makes use of the GPUs). We consider the following three implementations: (1) bufferkdtree(i): the adapted buffer k-d tree implementation, with both FindLeafBatch and ProcessAllBuffers being conducted on i GPUs; (2) kdtree(i): a multi-core implementation of a k-d tree-based search, which runs i threads in parallel on the CPU (each handling a single query); (3) brute(i): a brute-force implementation that makes use of i GPUs to process the queries in a massively parallel manner. The parameters for the buffer k-d tree implementation were fixed to appropriate values.6 Note that both competitors of bufferkdtree have been evaluated extensively in the literature; the reported runtimes and speed-ups can thus be put in a broad context. For simplicity, we fix the number k of nearest neighbors to k = 10 for all experiments. We focus on several data-intensive tasks from the field of astronomy. Note that a similar runtime behavior can be observed on data sets from other domains as well, as long as the dimensionality of the search space is moderate (e.g., from d = 5 to d = 30).
We follow our previous work and consider the psf mag, psf model mag, and all mag data sets of dimensionality d = 5, d = 10, and d = 15, respectively; for a description, we refer to Xxxxxxx et al. [8]. In addition, we consider a new dataset derived from the Catalina Real-time Transient Survey.

5 The code is publicly available under xxxxx://xxxxxx.xxx/xxxxxxx/bufferkdtree.
6 For a tree of height h, we fixed B = 2^(24−h) and the number M of indices fetched from input and reinsert in each iteration of Algorithm 1 to M = 10 · B.
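As a rough illustration of what the brute(i) competitor computes, below is a minimal NumPy sketch of a brute-force k-nearest-neighbor query with k = 10. The array names and sizes are illustrative only; the actual implementation described above is written in C/OpenCL and runs massively in parallel on the GPU.

```python
import numpy as np

def knn_brute(train, queries, k=10):
    """Brute-force k-NN: for each query, compute all squared distances
    to the training points and keep the k smallest (k = 10 as in the text)."""
    # (n_queries, n_train) matrix of squared Euclidean distances
    d2 = ((queries[:, None, :] - train[None, :, :]) ** 2).sum(axis=-1)
    # unordered indices of the k nearest training points per query
    idx = np.argpartition(d2, k, axis=1)[:, :k]
    # sort the k candidates by their actual distances
    order = np.take_along_axis(d2, idx, axis=1).argsort(axis=1)
    return np.take_along_axis(idx, order, axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))  # d = 10, as for the psf model mag data set
Q = rng.normal(size=(5, 10))
neighbors = knn_brute(X, Q, k=10)
print(neighbors.shape)  # (5, 10): ten neighbor indices per query
```

The k-d tree variants avoid the full distance matrix by pruning subtrees, which is why they win at moderate dimensionality (d = 5 to d = 30).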
Experimental Setup. A complete description of the instrumentation used for these experiments is provided in reference [29]; a summary of the details specific to BeS is provided here. Beryllium sulfide anions were formed via pulsed laser ablation [30] of a Be rod in the presence of He (25 psia) seeded with CS2 (room-temperature vapor pressure). Ablation was accomplished using the second harmonic of a Nd:YAG laser (532 nm), operating with a pulse energy of ~8 mJ. The ablation products were supersonically expanded into a differentially pumped vacuum chamber that housed a Wiley-McLaren time-of-flight mass spectrometer (WM-TOFMS) [31]. The axis of the mass spectrometer was perpendicular to the direction of the supersonic expansion. Within the mass spectrometer the anions were accelerated into a drift region, where they were directed by an Einzel lens and four sets of deflector plates. A fifth set of deflector plates could be used as a mass gate for selection of the anions of interest. The anions were directed through the center of a velocity map imaging (VMI) lens. This three-electrode component was modeled after the design of Eppink and Xxxxxx [32]. Photodetachment of BeS− was induced by the focused beam from a tunable dye laser (both Nd:YAG- and excimer-pumped dye lasers were used in these measurements). The laser beam was propagated along an axis perpendicular to the direction of the anion beam. The photon energies were chosen to be above the detachment threshold of BeS−, with an energy of 0.5–1 mJ per pulse and a beam diameter of <2 mm. The photodetachment lasers were frequency calibrated using the B–X absorption spectrum of room-temperature I2 vapor, with line positions provided by the PGOPHER software package [33]. Following photodetachment, the VMI optics focused the electrons onto a set of imaging-quality microchannel plates (MCPs) paired with a phosphor screen.
A CCD camera recorded the images, which were averaged over several hundred thousand laser pulses using the imaging collection software designed by Li et al. [34]. The images were transformed using the MEVELER program [35]. The MCPs were pulsed so that only the detached photoelectrons were detected. Mu-metal shielding surrounding the photodetachment and electron drift regions minimized image distortions due to external electric and magnetic fields. A photomultiplier tube (PMT) was positioned off-axis of the phosphor screen to monitor phosphor screen emission. This detection method allowed for the optimization of the anion and...
Experimental Setup. In order to evaluate the performance of Jam-X, we carry out experiments in two small-scale indoor testbeds deployed in office environments with USB-powered xxxxx. In the first testbed, we use JamLab, a tool for controlled interference generation [5], to evaluate the impact of interference in a realistic and repeatable fashion. In JamLab, interference is either replayed from trace files that contain RSSI values recorded under interference, or generated from models of specific devices [5]. In particular, we use JamLab to emulate the interference patterns produced by microwave ovens, by Bluetooth, and by Wi-Fi devices. In the latter case, the interference emulates a continuous file transfer. To avoid additional interference as much as possible, we carry out the experiments in this testbed during the night, when Wi-Fi activity in the office building is lowest. In the second testbed, we do not use JamLab, but we deliberately choose an 802.15.4 channel affected by interference, namely channel 18. On channel 18 there is Wi-Fi traffic and sometimes also interference from microwave ovens in a nearby kitchen. For the experiments, we use two xxxxx, S and R. Node S sends messages at transmission powers between -25 dBm and 0 dBm, embedding the power used in each packet. R replies to the message using the transmission power contained in the packet, i.e., the same one used by S. By using different transmission powers, we create different types of links for each handshake. Each packet is sent after a random interval in the order of tens of milliseconds, and nodes remain on the same channel for the whole duration of the experiment. Each experiment consists of several hundred thousand handshakes.
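The power-echoing handshake between S and R can be sketched as follows. This is a minimal simulation, not the Jam-X firmware: the packet layout, function names, and the assumption that only the two endpoint power levels (-25 dBm and 0 dBm) are used are all illustrative.

```python
import random

TX_POWERS_DBM = [-25, 0]  # power levels mentioned in the text (assumed discrete)

def make_packet(tx_power_dbm, seq):
    # S embeds its own transmission power in the packet so that R
    # can reply at exactly the same power level
    return {"seq": seq, "tx_power_dbm": tx_power_dbm}

def reply(packet):
    # R echoes the handshake using the power contained in the packet
    return {"seq": packet["seq"], "tx_power_dbm": packet["tx_power_dbm"]}

def handshake(seq):
    # each handshake picks a power, creating different link types
    power = random.choice(TX_POWERS_DBM)
    req = make_packet(power, seq)
    ack = reply(req)
    return req, ack

req, ack = handshake(seq=1)
print(req["tx_power_dbm"] == ack["tx_power_dbm"])  # True: R mirrors S's power
```

Embedding the power in the payload, rather than configuring it out of band, is what lets each of the several hundred thousand handshakes use an independently chosen link type.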
Experimental Setup. In this section we describe our experimental setup, still under development, for human-robot cooperation in flexible manufacturing. In particular, we describe our implementation for addressing the operator’s motion tracking problem. This work is built on top of a collaborative assembly workstation (see Fig. 2 and 3) developed at the Smart Mini Factory Laboratory (SMF) of the Free University of Bozen-Bolzano. The assembly tasks consist of the assembly of different variants of pneumatic cylinders. The workstation is equipped with a mobile workbench, a block-and-tackle for lightweight applications, an integrated Kanban rack, a working procedures panel, a double lighting system, an industrial screwdriver, and a knee lever press. Furthermore, the operator is supported by a Universal Robots UR3 cobot. The collaborative robot takes over non-value-adding tasks, from a lean management standpoint (Xxxx, 2009; Xxxxx et al., 2017), such as pick-and-give tasks, to eliminate handling time for the operator. The sensing system is composed of a ZED-mini stereo camera and a PSENscan 2D lidar scanner with an opening angle of 275 degrees and a measurement range of up to 5.5 meters. The laser scan is aligned with the ground plane at a fixed height of 45 cm above the ground.
Experimental Setup. In this section we describe the setup for an experimental evaluation of our prototype based on a testbed cloud using the RUBiS Web application and a synthetic workload generator.

4.1. Testbed cloud
Experimental Setup. In this section we present the configurations and hyperparameters of the XXXx and ladder networks, as well as the dataset used in our experiments.
Experimental Setup. The ORION® open path analyser was deployed on the roof of an electrical substation within the facility, approximately six metres above ground. After installation and initial tests, it was noted that the turret which allows the multi-path distribution in the azimuth plane could not reliably rotate to the left of its home position. Nevertheless, to ensure reliable and consistent operation, the retroreflectors were deployed in an arc to the right of the ORION®’s forward position. This led to a reduction in the spatial coverage of the monitoring but did not compromise the trial. The issue has been investigated and traced back to a design problem in a supplied part; this has now been fully resolved. Five retroreflectors (instead of the 9 originally planned) were deployed in a 90° arc over a 90 x 80 m area to the north-west and north-east of the instrument. The five retroreflectors were therefore designated 5-9, as shown in Figure 8, at the following positions (UTM zone 32T):

Retroreflector   Easting (m)   Northing (m)
5                482959.29     5030962.75
6                482946.67     5030983.82
7                482936.67     5031010.87
8                482976.23     5031002.96
9                483012.50     5031011.77
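The retroreflector coordinates listed above allow a quick sanity check of the deployment geometry. The sketch below computes the east-west and north-south spread of the reflectors alone; note that the stated 90 x 80 m monitored area also spans the open paths back to the instrument, whose position is not given here, so the reflector spread is expected to be smaller.

```python
# Retroreflector positions copied from the text: (id, easting, northing), UTM 32T
reflectors = [
    (5, 482959.29, 5030962.75),
    (6, 482946.67, 5030983.82),
    (7, 482936.67, 5031010.87),
    (8, 482976.23, 5031002.96),
    (9, 483012.50, 5031011.77),
]

eastings = [e for _, e, _ in reflectors]
northings = [n for _, _, n in reflectors]

# extent of the reflector arc in metres (UTM coordinates are metric)
spread_e = max(eastings) - min(eastings)   # east-west extent
spread_n = max(northings) - min(northings) # north-south extent
print(round(spread_e, 2), round(spread_n, 2))  # 75.83 49.02
```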
Experimental Setup. The purpose of the experimental study is to find out how the different evaluation techniques for finding robust optima compare when used within the same algorithmic basis, namely the (5/2DI, 35)-σSA-ES and the CMA-ES. The general experimental settings, shown in Table 8.1, are restricted to one particular search space dimension size, n = 10, and an evaluation budget of 10,000 function evaluations, which is taken as the standard setup throughout this chapter. To assess the quality of each scheme, we record the final solution quality over multiple runs. Here, the final solution quality refers to a highly accurate Xxxxx-Xxxxx approximation (using m = 1000 samples) of the expected objective function value of the solution returned after each optimization run.

Table 8.1: The general experimental setup.
Search space dimension size: n = 10
Evaluation budget: 10,000 function evaluations
Quality assessment: integration using 1000 samples; (mean, std, median) over all runs; rank sum for ranking of the algorithmic schemes

Table 8.2: The test problems used for empirical comparison.
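The final-solution quality described above is a sample-based approximation of the expected objective function value. A minimal sketch, under the assumption that it is a plain sample-average estimate with m = 1000; the Gaussian perturbation model and the sphere test function are illustrative, not taken from the chapter:

```python
import random

def expected_quality(f, x, m=1000, sigma=0.1, seed=42):
    """Estimate E[f(x + delta)] with m samples, mirroring the m = 1000
    approximation in the text. The perturbation model (Gaussian with
    standard deviation sigma) is an assumption for illustration."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        xp = [xi + rng.gauss(0.0, sigma) for xi in x]
        total += f(xp)
    return total / m

# Illustrative test function: the sphere function in n = 10 dimensions,
# matching the chapter's search space dimension
sphere = lambda x: sum(xi * xi for xi in x)

# At the optimum x = 0 the estimate should be close to n * sigma^2 = 0.1
print(round(expected_quality(sphere, [0.0] * 10), 3))
```

Averaging this quantity over multiple runs, together with std, median, and rank sums, then yields the comparison statistics listed in Table 8.1.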
Experimental Setup. To evaluate the proposed Fuzzy Inference method for updating the candidate points, we tested the 3D-ASM on cardiac CT data from 9 patients, comparing both the simple convolution-based edge detection and the newly implemented FI-based method. For this, a statistical shape model was generated using expert-drawn contours of a group of 53 patients and normals, from 3D MR data [60]. The shape parameterization presented in Section 3.2.1 was applied, where each sample was divided into 16 slices, each containing 32 points for the epicardial contour and 32 points for the endocardial contour. To reduce model dimensionality, the model was restricted to represent 99% of the shape variation present in the training data, resulting in 33 modes of variation. The 3D ASM was applied to 9 short-axis CT acquisitions of cardiac LVs. Scans were acquired with CT scanners from two different vendors, and had an axial slice thickness of approximately 1 mm and an in-plane resolution of 0.5 mm/pixel. All data sets were reformatted to yield short-axis image slices. Prior to matching, the model pose was initialized manually. The initial model scale was equal to the average model scale of the training set. The model shape was initialized to the mean training shape, whereas the position was manually initialized inside the cardiac LV. The class centers of the three tissue classes used by FCM were initialized identically for each iteration and for each patient. During model matching, deformation was limited by constraining each component of the model deformation parameter vector between −3σ and +3σ. The model search ran for a fixed number of iterations, the same for both the FI-based model and the convolution-based model. For the FI-based model, a two-stage matching was employed: initially, the convolution method was used until the update step size between iterations substantially decreased.
From there, the final adjustments, small scale and pose changes and deformation of the model, were realized using the FI-based point generation. The model states from the last iteration of both models were used for comparing the two candidate point generation methods. The method was visually evaluated to assess whether the new candidate point generation method is an improvement with respect to the convolution-based technique, by comparing results from the same iteration in the matching process. In case of matching failure, the match was reported as a failure and excluded from further quantitative e...
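The ±3σ deformation limit mentioned above is the standard clamp on statistical-shape-model parameters, where σ_i is the standard deviation of mode i (the square root of the i-th PCA eigenvalue). A minimal sketch, with illustrative eigenvalues rather than the actual 33-mode cardiac model:

```python
import math

def clamp_shape_params(b, eigenvalues):
    """Constrain each shape parameter b_i to [-3*sigma_i, +3*sigma_i],
    with sigma_i = sqrt(lambda_i) for the i-th mode of variation --
    the deformation limit described in the text."""
    clamped = []
    for bi, lam in zip(b, eigenvalues):
        limit = 3.0 * math.sqrt(lam)
        clamped.append(max(-limit, min(limit, bi)))
    return clamped

# Example with 3 of the 33 modes; limits are 3.0, 1.5, and 6.0
print(clamp_shape_params([0.5, -4.0, 2.0], [1.0, 0.25, 4.0]))
# [0.5, -1.5, 2.0]: only the out-of-range second parameter is clipped
```

This keeps each reconstructed shape within roughly three standard deviations of the training distribution, preventing the match from deforming into implausible anatomies.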