Experiments and Results Sample Clauses

Experiments and Results. A. Experimental Setup 1) Success rate - how many times the robot manages to localise itself correctly. We consider that the robot has localised itself correctly if the error is less than 10 cm. 2) Initial localisation time - how long it takes to minimise the localisation error with respect to ground truth. In case of localisation failure the measurement was discarded. 3) Convergence time - how long it takes before the value of one standard deviation is less than 10 cm and 5 degrees. This metric was computed only for the cases when the localisation was performed successfully. 4) Computation time - how much time it takes to generate the prior. We have tested five particle population sizes (25, 100, 250, 500, 1000 particles) for maps of three different resolutions (0.2 m, 0.5 m, 1.0 m). For the coarsest map (resolution 1 m), the resolution of the voxel grid in pose space was 1.5 m and π radians, and for the other two map resolutions the pose voxel grid was 0.5 m and π radians. Please recall that we have been performing our tests using NDT maps and evaluating two different priors for NDT-MCL. If we used a regular occupancy grid map or OctoMap, it would be impossible to achieve accuracy below 10 cm for maps with grid cells as large as 0.5×0.5 m² or 1.0×1.0 m² [4].
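As a rough illustration of how these four metrics could be computed from per-run localisation logs, the following Python sketch assumes a hypothetical record layout (time stamp, position error, and the two standard deviations); the field names and data structures are illustrative assumptions, not the original evaluation code.

```python
# Hypothetical evaluation of the four metrics from per-run localisation logs.
# The record layout and field names are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sample:
    t: float              # seconds since the start of the run
    error: float          # position error w.r.t. ground truth [m]
    std_xy: float         # one standard deviation of the position estimate [m]
    std_theta_deg: float  # one standard deviation of the heading estimate [deg]

def localised(run: List[Sample], err_thresh: float = 0.10) -> bool:
    """Success: the final error is below 10 cm."""
    return bool(run) and run[-1].error < err_thresh

def initial_localisation_time(run: List[Sample], err_thresh: float = 0.10) -> Optional[float]:
    """Time until the error first drops below the threshold (None on failure, i.e. discarded)."""
    return next((s.t for s in run if s.error < err_thresh), None)

def convergence_time(run: List[Sample]) -> Optional[float]:
    """Time until one standard deviation is below 10 cm and 5 degrees (successful runs only)."""
    return next((s.t for s in run if s.std_xy < 0.10 and s.std_theta_deg < 5.0), None)

def success_rate(runs: List[List[Sample]]) -> float:
    """Fraction of runs in which the robot localised itself correctly."""
    return sum(localised(r) for r in runs) / len(runs)
```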
Experiments and Results. In order to assess the performance of our fuzzy representations we will compare them to our non-fuzzy representations from the previous chapters as well as with the other evolutionary (cefr-miner and esia) and non-evolutionary (Ltree, OC1 and C4.5) classification algorithms introduced in Chapter 2. N/A indicates that no results were available. The experimental setup and data sets are the same as in the previous chapters and are described in Sections 2.7 and 2.8. An overview of the most important gp parameters can be found in Table 4.1. In the case of our partitioning gp algorithms the criterion used, either gain or gain ratio, is indicated between brackets ‘(’ and ‘)’. The tables with results contain a column, labeled k, to indicate the number of clusters in the case of our clustering gp algorithms or the maximum number of partitions in the case of the partitioning gp algorithms. The best (average) result for each data set is printed in bold font. Because the partitioning gp algorithms do not scale well with parameter k we will only look at a maximum of three partitions, clusters or fuzzy sets per numerical valued attribute.
Population Size: 100
Initialization: ramped half-and-half
Initial maximum tree depth: 6
Maximum number of nodes: 63
Tournament Size: 5
Evolutionary model: (μ, λ)
Offspring Size: 200
Crossover Rate: 0.9
Mutation Rate: 0.9
To determine if the results obtained by our algorithms are statistically significantly different from the results reported for esia, cefr-miner, Ltree, OC1 and C4.5, we have performed two-tailed independent samples t-tests with a 95% confidence level (p = 0.05) using the reported means and standard deviations. The null-hypothesis in each test is that the means of the two algorithms involved are equal. In order to determine whether the differences between our gp algorithms are statistically significant we used paired two-tailed t-tests with a 95% confidence level (p = 0.05) using the results of 100 runs (10 random seeds times 10 folds). In these tests the null-hypothesis is also that the means of the two algorithms involved are equal.
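For concreteness, an independent-samples two-tailed t-test computed from reported means and standard deviations, as described above, could look like the following sketch; the means, standard deviations, and sample sizes are placeholders, not values from the result tables.

```python
# Two-tailed independent-samples t-test from reported means and standard deviations.
# The means, standard deviations, and sample sizes below are placeholders.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=85.2, std1=3.1, nobs1=100,   # e.g. our GP variant (illustrative numbers)
    mean2=83.7, std2=4.0, nobs2=100,   # e.g. a reported reference result (illustrative)
    equal_var=True,
)

# Null hypothesis: the means of the two algorithms are equal.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant at 5%: {p_value < 0.05}")
```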
Experiments and Results. Since the focus of this paper is on tracking, and not detection, for the purpose of the following experiments we restrict ourselves to using 2D laser range data.
Algorithm 1: Cascaded logic for track initiation
Experiments and Results. The baseline MT systems (referred to as v0) were solely trained on out-of-domain data (parallel, monolingual, and development data from Europarl). First, we exploited the in-domain development data and used it in the first modification of the baseline system (v1) instead of the out-of-domain (Europarl) data. In this case, the individual system models (translation tables, language model, etc.) remained the same, but their importance (optimal weights in the Moses' log-linear framework) was different. The in-domain monolingual data could be exploited in two ways: a) joining the general-domain data and the new in-domain data into one set, using it to train one language model, and optimizing its weight using MERT on the in-domain development data; or b) training a new, separate language model from the new data, adding it to the log-linear framework, and letting MERT optimize its weight together with the other model weights. We tested both approaches. In system v2, we followed the first option (retraining the language model on the enlarged data set), and in system v3, we followed the second option (training an additional language model and optimizing its weight). An overview of the system versions trained for the first cycle evaluation is presented in Table 16.
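The weighted combination of model scores in a Moses-style log-linear framework, with an additional in-domain language model as in system v3, can be illustrated with the following sketch; the feature names, scores, and weights are invented for illustration and are not the tuned values from this evaluation.

```python
import math

# Illustrative log-linear scoring of a single translation hypothesis: each model
# (translation table, language model(s), penalties) contributes a feature score,
# and MERT tunes the feature weights on the development data. All names and
# numbers below are placeholders.
def loglinear_score(features: dict, weights: dict) -> float:
    return sum(weights[name] * value for name, value in features.items())

hypothesis_features = {
    "tm_phrase_prob": math.log(0.42),   # translation model score
    "lm_general":     math.log(0.031),  # out-of-domain language model (v0-v2)
    "lm_in_domain":   math.log(0.054),  # additional in-domain LM (as in system v3)
    "word_penalty":   -7.0,
}
mert_weights = {
    "tm_phrase_prob": 0.25,
    "lm_general":     0.30,
    "lm_in_domain":   0.35,
    "word_penalty":   0.10,
}

print(loglinear_score(hypothesis_features, mert_weights))
```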
Experiments and Results. To compare the classification performance of our new representations with the simple representation of the previous chapter we have conducted the same experiments using the same settings and data sets (see Section 2.7). We will also compare our results to Ltree, OC1 and C4.5 and the other evolutionary algorithms (esia and cefr-miner) already mentioned in the previous chapter. The tables with results also contain an extra column, labeled k, to indicate the number of clusters in the case of our clustering gp algorithms or the maximum number of partitions in the case of the gain gp and gain ratio gp algorithms. The best (average) result for each data set is printed in bold font. The entry N/A indicates that no results were available. To determine if the results obtained by our algorithms are statistically significantly different from the results reported for esia, cefr-miner, Ltree, OC1 and C4.5, we have performed two-tailed independent samples t-tests with a 95% confidence level (p = 0.05) using the reported means and standard deviations. The null-hypothesis in each test is that the means of the two algorithms involved are equal. In order to determine whether the differences between our gp algorithms are statistically significant we used paired two-tailed t-tests with a 95% confidence level (p = 0.05) using the results of 100 runs (10 random seeds times 10 folds). In these tests the null-hypothesis is also that the means of the two algorithms involved are equal.
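The paired two-tailed t-test over the 100 runs (10 random seeds times 10 folds) could be carried out as in the following sketch; the accuracy arrays are synthetic placeholders rather than actual results.

```python
import numpy as np
from scipy.stats import ttest_rel

# Paired two-tailed t-test over 100 runs (10 random seeds x 10 folds).
# The accuracy arrays are synthetic placeholders, not actual results.
rng = np.random.default_rng(0)
acc_variant_a = rng.normal(loc=0.85, scale=0.03, size=100)  # GP variant A, one value per run
acc_variant_b = rng.normal(loc=0.84, scale=0.03, size=100)  # GP variant B, same seeds/folds

t_stat, p_value = ttest_rel(acc_variant_a, acc_variant_b)

# Null hypothesis: the two algorithms have equal mean accuracy.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant at 5%: {p_value < 0.05}")
```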
Experiments and Results. We have two accurate methods to estimate time delays: i) the EA-M-CV method is an evolutionary algorithm with a mixed representation (integer and real numbers) and an objective function based on a kernel formulation and cross-validation [12
Experiments and Results. We used CrowdCrafting2 for recruiting workers because of the limited presence of Mongolian speakers on platforms such as Amazon Mechanical Turk and CrowdFlower. CrowdCrafting is free for scientific projects with volunteer contributors. In phase 1, a total of 77 web users were asked to translate 947 manually built synsets from the space domain, that is, the subtree under the high-level synsets of space in (Ganbold et al., 2014b; Giunchiglia et al., 2009). In phase 2, 75 web users were asked to validate the results of phase 1. In total, contributors have completed 9,490 tasks and have introduced 6,442 words3. In order to evaluate contributions from the crowd, we compiled a gold standard from the space domain in Mongolian, covering all synsets that were included in the crowdsourcing experiment. The gold standard corpus was created by
2 xxxxx://xxxxxxxxxxxxx.xxx
3 Data collected during the two phases are available at xxxxx://xxxxxxxxxxxxx.xxx/project/mongolian-lkc and at xxxxx://xxxxxxxxxxxxx.xxx/project/mongolian-lkc-evaluation under a CC-BY-SA license.
Experiments and Results. The following experiments were conducted on a custom implementation of one-step semi-gradient SARSA built on top of the GNFlow package by Dr. Elizabeth Newman, named ClassicControl on a Manifold (see Appendix for details). This implementation utilizes an ε-greedy policy and a linear polynomial featurizer of the state-action value function qw(s, a), with the following parameters:
• polynomial featurizer order = 8
• discount γ = 0.9
• number of episodes = 400
• maximum steps = 9000
• εmin = (varied), εmax = 1
• initial step-size α0 = (varied), with step-size schedule αt = α0/(1 + t)
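A minimal, self-contained sketch of one-step semi-gradient SARSA with an ε-greedy policy and a linear polynomial featurizer is given below; it is not the GNFlow / ClassicControl on a Manifold implementation, and the stand-in environment (MountainCar-v0), the feature construction, and the ε schedule are assumptions made only for illustration.

```python
import gymnasium as gym
import numpy as np

# Minimal sketch of one-step semi-gradient SARSA with an epsilon-greedy policy and
# a linear polynomial featurizer. This is NOT the GNFlow / ClassicControl on a
# Manifold implementation; MountainCar-v0 is used only as a stand-in environment,
# and the feature construction and epsilon schedule are assumptions.
env = gym.make("MountainCar-v0")
n_actions = env.action_space.n              # 3 discrete actions
state_dim = env.observation_space.shape[0]  # 2-dimensional state
ORDER, GAMMA, ALPHA0 = 8, 0.9, 0.1

def features(state, action):
    """Per-action polynomial features [s^0, s^1, ..., s^ORDER], zero for other actions."""
    powers = np.concatenate([state**k for k in range(ORDER + 1)])
    feats = np.zeros((n_actions, powers.size))
    feats[action] = powers
    return feats.ravel()

def q_value(w, state, action):
    return w @ features(state, action)

def epsilon_greedy(w, state, eps):
    if np.random.rand() < eps:
        return np.random.randint(n_actions)
    return int(np.argmax([q_value(w, state, a) for a in range(n_actions)]))

w = np.zeros(n_actions * state_dim * (ORDER + 1))

for episode in range(400):
    state, _ = env.reset()
    eps = max(0.05, 0.99**episode)          # illustrative schedule between eps_max = 1 and eps_min
    action = epsilon_greedy(w, state, eps)
    for t in range(9000):
        alpha = ALPHA0 / (1 + t)            # decaying step size (assumed schedule)
        next_state, reward, terminated, truncated, _ = env.step(action)
        target = reward
        if not (terminated or truncated):
            next_action = epsilon_greedy(w, next_state, eps)
            target += GAMMA * q_value(w, next_state, next_action)
        td_error = target - q_value(w, state, action)
        w += alpha * td_error * features(state, action)   # semi-gradient update
        if terminated or truncated:
            break
        state, action = next_state, next_action
```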
Experiments and Results. The following experiments were conducted on a custom implementation of Proximal Policy Optimization (both Penalty and Clip varieties) with Generalized Advantage Estimation built using the TorchRL package, named Geometric Control (see Appendix for details). The implementation utilizes a size [64, 64] multi-layer perceptron with Tanh activations for both the policy and value function approximation. The policy MLP trains the parameters loc and scale for a probabilistic actor with a TanhNormal distribution, i.e. πθ ∼ TanhNormal(loc, scale), while the value MLP estimates the advantage. Both MLPs are orthonormally initialized. The advantage used is the Generalized Advantage Estimate (GAE) presented in Section 2.2.5. The purpose of these experiments is to examine the difference between an empirical Fisher distribution and one obtained via Monte-Carlo methods (i.e. sampling the policy). Two algorithms are implemented — PPO-Penalty and PPO-Clip — as well as two test environments from the OpenAI Gym - MuJoCo package: HalfCheetah-v4 and InvertedPendulum-v4 [1]. The following parameters are used:
• frames per batch = 2000
• total frames = 200,000
• mini-batch size = 100
• loss optimization epochs = 10
• learning rate λ = 3 × 10⁻⁴
• discount γ = 0.99
• GAE learning rate λGAE = 0.95
• PPO-Clip ε = 0.2
• PPO-Penalty KL-target d = 0.01
• PPO-Penalty β = 1
• Monte-Carlo Fisher sample size = 20
The Monte-Carlo and empirical Fisher approximations are computed as outlined by Eq. (3.65) and Eq. (3.62), respectively. Specifically, the estimate is only collected for every mini-batch on the last optimization epoch (i.e. 2000/100 = 20 estimates per batch). For the Monte-Carlo approximation, during each mini-batch in the last epoch a sample of 20 actions is collected from the policy distribution πθ and ∇ log πθ is computed for the sample; the sum over the sample yields the Fisher approximation for the current batch (as per Kunstner (2020) [22]). For the empirical Fisher approximation, during each mini-batch in the last epoch the current log-policy gradient is collected and the outer products are averaged over the batch. It is important to note that since the goal of these experiments is to compare the two Fisher approximations, a full training (e.g. with 1,000,000 total frames) was not conducted due to both time and computational cost restrictions, as well as the fact that the training rewards are not the object of study (the PPO implementations and test environments used are standard). The observed rewar...
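To make the distinction concrete, the following standalone sketch computes an empirical Fisher estimate (outer products of the score at the observed actions) and a Monte-Carlo Fisher estimate (resampling actions from the current policy), in the spirit of Kunstner (2020); the tiny Gaussian policy, batch, and dimensions are placeholders, and this is not the Geometric Control / TorchRL code nor the exact Eq. (3.62)/(3.65) implementation.

```python
import torch

# Standalone comparison of an empirical Fisher estimate and a Monte-Carlo Fisher
# estimate for a tiny Gaussian policy. The policy, batch, and dimensions are
# placeholders; this is not the Geometric Control / TorchRL implementation.
torch.manual_seed(0)
theta = torch.randn(4, requires_grad=True)          # policy parameters (illustrative)

def log_prob(action, state):
    """log pi_theta(a | s) for a 1-D Gaussian whose mean is linear in the state."""
    mean = state @ theta
    return torch.distributions.Normal(mean, 1.0).log_prob(action)

def score(action, state):
    """Gradient of log pi_theta(a | s) with respect to theta."""
    g, = torch.autograd.grad(log_prob(action, state), theta)
    return g.detach()

states = torch.randn(100, 4)                         # one mini-batch of states
actions = torch.randn(100)                           # actions actually taken in the batch

# Empirical Fisher: average outer product of the score at the *observed* actions.
emp_fisher = torch.zeros(4, 4)
for s, a in zip(states, actions):
    g = score(a, s)
    emp_fisher += torch.outer(g, g)
emp_fisher /= len(states)

# Monte-Carlo Fisher: resample actions from the current policy (20 per state here).
n_samples = 20
mc_fisher = torch.zeros(4, 4)
for s in states:
    mean = (s @ theta).detach()
    for _ in range(n_samples):
        a = torch.distributions.Normal(mean, 1.0).sample()
        g = score(a, s)
        mc_fisher += torch.outer(g, g)
mc_fisher /= len(states) * n_samples

# A rough scalar measure of how much the two approximations differ.
print(torch.linalg.norm(emp_fisher - mc_fisher).item())
```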

Related to Experiments and Results

  • Results The five values obtained shall be arranged in order and the median value taken as the result of the measurement. This value shall be expressed in Newtons per centimeter of width of the tape.

  • Continuity of Operations Engage in any business activities substantially different than those in which Borrower is presently engaged, (2) cease operations, liquidate, merge, transfer, acquire or consolidate with any other entity, change its name, dissolve or transfer or sell Collateral out of the ordinary course of business, or (3) pay any dividends on Borrower’s stock (other than dividends payable in its stock), provided, however that notwithstanding the foregoing, but only so long as no Event of Default has occurred and is continuing or would result from the payment of dividends, if Borrower is a “Subchapter S Corporation” (as defined in the Internal Revenue Code of 1986, as amended), Borrower may pay cash dividends on its stock to its shareholders from time to time in amounts necessary to enable the shareholders to pay income taxes and make estimated income tax payments to satisfy their liabilities under federal and state law which arise solely from their status as Shareholders of a Subchapter S Corporation because of their ownership of shares of Borrower’s stock, or purchase or retire any of Borrower’s outstanding shares or alter or amend Borrower’s capital structure.

  • Test Results The employer, upon request from an employee or former employee, will provide the confidential written report issued pursuant to 4.9 of the Canadian Model in respect to that employee or former employee.

  • Operations As of the date hereof, the Company has not conducted, and prior to the IPO Closing the Company will not conduct, any operations other than organizational activities and activities in connection with offerings of its securities.

  • Audit Results If an audit by a Party determines that an overpayment or an underpayment has occurred, a notice of such overpayment or underpayment shall be given to the other Party together with those records from the audit which support such determination.

  • Narrative Results ‌ 1. For the first Quarterly Claims Review Report only, a description of (a) Xxxxx Pharmacy's billing and coding system(s), including the identification, by position description, of the personnel involved in coding and billing, and (b) a description of controls in place to ensure that all items and services billed to Medicare or a state Medicaid program by Xxxxx Pharmacy are medically necessary and appropriately documented. Subsequent Quarterly Claims Review Reports should describe any significant changes to items (a) and (b) or, if no significant changes were made, state that the systems and controls remain the same as described in the prior Quarterly Claims Review Report.
