Data Sets Sample Clauses

Data Sets. Each Party hereby agrees that, for data sets that it uses to demonstrate its product (PTC Products in the case of PTC, and the RA Products in the case of RA), it will, if allowed under its agreement with the provider(s) of the data set, provide such data sets to the other Party solely for purposes of the other Party demonstrating the Combined Offering.
Data Sets. Each Party hereby agrees that, for data sets that it uses to demonstrate its product (PTC Products in the case of PTC, and the RA Products in the case of RA), it will, if allowed under its agreement with the provider(s) of the data set, provide such data sets to the other Party solely for purposes of the other Party demonstrating the Combined Offering.
2. SUPPORT SERVICES DEFINITIONS
3.1 “Level 1 Support” means the resolution of Customer inquiries relating to the Combined Offerings in real time or off-line, without assistance from the other Party except as otherwise agreed.
3.2 “Level 2 Support” means the technical expertise that one Party provides to the other Party’s technical support case managers, by phone, web-based support interface or other agreed-upon means (“Official Means”), concerning inquiries regarding the Combined Offering, where that expertise is necessary to resolve a Customer inquiry off-line: Level 1 Support has not resolved the inquiry, the technical support representative who took the call generating the inquiry found it necessary to elevate it to the applicable Party’s technical support case manager for off-line resolution, and that case manager in turn found it necessary to contact the other Party to obtain the technical expertise necessary to resolve the inquiry.
3.3 “Error” is defined in the Strategic Alliance Agreement.
3.4.
Data Sets. The best benchmark for evaluating the efficiency and effectiveness of a particular methodology for identifying differentially expressed proteins is to apply that methodology to a data set with known behavior, or to a control sample. In essence, all protein expression ratios derived from these control data sets should be known in advance, with allowance made for random noise contamination. In this work, we make use of LC-MS/MS labeling-based protein expression data derived from a yeast protein mixture. This mixture consists of light and heavy isotope-labeled yeast samples that were mixed in a 1:1 ratio and digested in solution using trypsin. Each of the six resulting gel fractions was then run in duplicate as technical replicates. Peptide/protein identifications were made using the SEQUEST algorithm (cf. Section 2.5.1). We note here that the use of specially designed control samples to validate statistical methods is quite rare in proteomics. The third data set we use was first published in Cox and Xxxx (2007) [28]. The data come from a SILAC experiment, where HeLa cells were stimulated with EGF (Epidermal Growth Factor) for 2 hours prior to mass spectrometric analysis. The signal pairs (heavy, light) correspond to the EGF-stimulated and control samples, respectively. The combined protein mixture was digested in solution using trypsin, and the resulting peptides were separated into 24 gel fractions. Each gel fraction was then purified and analyzed using LC-ESI (liquid chromatography - electrospray ionization) combined with MS/MS. Peptide/protein identifications were made using the MASCOT algorithm (cf. Section 2.5.2).
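To make the control-sample logic concrete, here is a minimal sketch (not the authors' pipeline; the intensity values are purely illustrative): for a 1:1 heavy/light mix, every log2 expression ratio should scatter around zero, so the observed spread estimates the random-noise level.

```python
import math

# Illustrative heavy/light peptide intensities from a hypothetical
# 1:1 control mix; real values would come from the LC-MS/MS pipeline.
heavy = [1.02e6, 8.7e5, 5.4e5, 2.1e6]
light = [9.80e5, 9.1e5, 5.1e5, 2.0e6]

# For a 1:1 mix the true ratio is 1, so log2 ratios should center on 0;
# any systematic departure indicates bias, and the scatter is noise.
ratios = [math.log2(h / l) for h, l in zip(heavy, light)]
mean = sum(ratios) / len(ratios)
sd = (sum((r - mean) ** 2 for r in ratios) / (len(ratios) - 1)) ** 0.5
print(f"mean log2 ratio {mean:+.3f} (expected ~0), noise sd {sd:.3f}")
```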
Data Sets. Mapping an application or algorithm to hardware demands an appropriate data set for the design and testing procedure. The data set should be representative of the algorithm and of the way the final user will use it, in order to help the designer make the right decisions. An inappropriate data set can mislead the designer into decisions that leave the final system without the desired functionality or with low performance. Data sets are used in three design phases: profiling, simulation and verification. These data sets are the same for the hardware and the software designs, but they are analyzed in different ways. In the profiling phase, the designer must analyze the algorithm or the application in order to find its most computationally demanding part (see the profiling sketch after this clause). If the data set is not appropriate, the designer may accidentally focus on a different part of the code than he or she should, and as a result will map an inappropriate part to hardware. The resulting system will not achieve high performance, because the hardware part will not be accelerating the most demanding part of the algorithm. In the simulation phase, the data set has to be representative and cover every state of the algorithm. If all states are not covered, the system cannot be tested correctly and will probably fail at run time. In the verification phase, proper data sets lead to proper functional verification. If the data set does not cover all cases, the system will not have been properly verified and may produce wrong results at run time; such faults are often very difficult to find. The well-known 1994 Intel Pentium FDIV division bug is the most famous such case. One solution could be a data set that tests the algorithm exhaustively, but that is inapplicable in practically all cases: due to state explosion, the profiling, simulation and verification phases would take far too long. The same data sets are used on the software side to demonstrate proper functionality.
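As an illustration of the profiling phase, a minimal Python sketch follows (the algorithm and data set are hypothetical, and Python's profiler stands in for whatever toolchain the designer actually uses): it runs the algorithm on a representative input and ranks functions by cumulative time, which is how the most computationally demanding part would be located before hardware mapping.

```python
import cProfile
import pstats

def run_algorithm(data):
    # Hypothetical algorithm under study; the hot spot reported by the
    # profiler is the candidate for hardware acceleration.
    return sorted(data)

# A representative data set matters here: profiling an unrepresentative
# input could point the designer at the wrong part of the code.
representative_data = list(range(100_000, 0, -1))

profiler = cProfile.Profile()
profiler.runcall(run_algorithm, representative_data)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```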
Data Sets. The implementation of the Count-Min algorithm focused on the efficient mapping of the method onto a hardware-based platform. We used a frequently used real-life data set, WorldCup'98 [37] (wc'98), for the algorithmic analysis, the performance evaluation and the validation of the output results. The wc'98 data set consists of all HTTP requests that were directed, within a period of 92 days, to the web servers hosting the official World Cup 1998 website. It contains a total of 1.089 billion valid requests. Each request was indexed using the web-page URL as a key. We created point queries over the Count-Min sketch data structure, estimating the popularity of each web page by counting the frequency of its appearances.
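A minimal software sketch of the Count-Min point-query workflow described above follows (a stand-in for the hardware implementation; the table width, depth and hash construction are illustrative, not the parameters used in the evaluation):

```python
import random

class CountMinSketch:
    """d hash rows of width w; a point query returns the minimum of the
    d counters an item maps to, which upper-bounds its true frequency."""

    def __init__(self, width=2048, depth=5, seed=42):
        self.width, self.depth = width, depth
        rng = random.Random(seed)
        # One salt per row; hashing (salt, item) stands in for the
        # pairwise-independent hash families of the original algorithm.
        self.salts = [rng.random() for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.width

    def update(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def query(self, item):
        # Point query: never underestimates, may overestimate.
        return min(self.table[row][col] for row, col in self._cells(item))

# Hypothetical usage mirroring the wc'98 workflow: stream request URLs,
# then estimate a page's popularity with a point query.
cms = CountMinSketch()
for url in ["/index.html", "/scores.html", "/index.html"]:
    cms.update(url)
print(cms.query("/index.html"))  # >= 2
```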
Data Sets. As described above, the Exponential Histogram (EH) is a method that can efficiently offer a probabilistic solution to the counting problem. For the testing and the evaluation of our implemented system we again used the real-life WorldCup'98 data set [37] (wc'98). We streamed the data into the EH data structure. During the streaming process, we issued queries over the EH data structure about the number of appearances of specific valid requests. The answers we obtained were cross-validated against the answers from the Java implementation that we used as the basis for our EH implementation.
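A simplified software sketch of an exponential histogram follows (a stand-in for the actual hardware and Java implementations; the per-size bucket limit and parameters are illustrative). It approximates how many 1-events, e.g. matches of one specific valid request, occurred in the last `window` stream positions:

```python
from collections import deque

class ExponentialHistogram:
    """Approximate count of 1-bits in the last `window` positions of a
    bit stream, with relative error of roughly 1/k."""

    def __init__(self, window, k=4):
        self.window, self.k = window, k
        self.time = 0
        # (timestamp of most recent 1 covered, bucket size); newest first.
        self.buckets = deque()

    def add(self, bit):
        self.time += 1
        # Drop buckets whose newest 1 has slid out of the window.
        while self.buckets and self.buckets[-1][0] <= self.time - self.window:
            self.buckets.pop()
        if not bit:
            return
        self.buckets.appendleft((self.time, 1))
        # Cascade merges: allow at most k//2 + 2 buckets of each size,
        # merging the two oldest into one bucket of double size.
        size = 1
        while True:
            same = [i for i, b in enumerate(self.buckets) if b[1] == size]
            if len(same) <= self.k // 2 + 2:
                break
            i, j = same[-2], same[-1]          # two oldest of this size
            newer_ts = self.buckets[i][0]
            del self.buckets[j]
            self.buckets[i] = (newer_ts, size * 2)
            size *= 2

    def count(self):
        # Only the oldest bucket may straddle the window boundary, so it
        # contributes half its size to the estimate.
        if not self.buckets:
            return 0
        return sum(s for _, s in self.buckets) - self.buckets[-1][1] // 2

# Hypothetical usage: count appearances of one request URL in the last
# 1000 stream items by feeding the histogram a membership bit per item.
eh = ExponentialHistogram(window=1000)
for req in ["/a", "/b", "/a", "/a"] * 500:
    eh.add(req == "/a")
print(eh.count())  # roughly 750
```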
Data Sets. The QualiMaster project will use high-volume financial data streams. We tested and evaluated the correlation software-based system using real data from the stock market. The stock prices are provided via an API from SPRING, which provides the consortium with access to real-time quotes and market depth data.
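As an illustration of the kind of computation the correlation system performs (not the project's actual implementation; the window size and price series are illustrative), a sliding-window Pearson correlation over two synchronized price streams could look like this:

```python
import math
from collections import deque

def windowed_correlation(xs, ys, window=50):
    """Pearson correlation of the most recent `window` paired ticks,
    emitted once per incoming pair."""
    bx, by = deque(maxlen=window), deque(maxlen=window)
    out = []
    for x, y in zip(xs, ys):
        bx.append(x)
        by.append(y)
        n = len(bx)
        mx, my = sum(bx) / n, sum(by) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(bx, by))
        vx = sum((a - mx) ** 2 for a in bx)
        vy = sum((b - my) ** 2 for b in by)
        # Guard against zero variance (e.g. a flat price series).
        out.append(cov / math.sqrt(vx * vy) if vx and vy else 0.0)
    return out

# Illustrative ticks for two perfectly co-moving stocks.
prices_a = [100.0 + 0.1 * i for i in range(200)]
prices_b = [50.0 + 0.05 * i for i in range(200)]
print(windowed_correlation(prices_a, prices_b)[-1])  # ~1.0
```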
Data Sets. For testing the LDA implementation we collected and prepared data sets consisting of text files in which each line is a bag of terms from one document of the particular data set (a loading sketch follows this list). The following data sets are available:
● Flickr emotions dataset (47MB): This dataset consists of a crawl of Flickr image metadata in which emotional tags like “angry” and “happy” were used as queries. Each document in this dataset is the concatenation of a particular image’s title, tags and description.
● Global warming dataset (64MB): This dataset was cropped from a Wikipedia article about global warming. Cropping is a technique where, given a set of documents (the Wikipedia article in our case) as a seed, a set of similar documents can be selected: key phrases are extracted from the initial set and used as text queries (again to Wikipedia in our case) to obtain more similar documents and expand the initial set. The dataset thus consists of Wikipedia articles related to the topic “global warming”. Each document in this dataset is a paragraph from one of the articles.
● Xxxxx dataset (75MB): This dataset was constructed in exactly the same way as its predecessor, but with the Wikipedia article about “Xxxxx” as the seed.
● Newsgroups dataset (9.3MB): The 20-Newsgroups dataset was originally collected by X. Xxxx. It consists of 19,997 newsgroup postings and is usually divided into 7 categories for supervised learning, covering different areas: “alt” - atheism; “comp” - computer hardware and software; “misc” - things for sale; “rec” - baseball, hockey, cars, and bikes; “sci” - cryptography, medicine, space, and electronics; “soc” - Christianity; and “talk” - religion, politics, guns, and the Middle East. The number of postings per category varies from 997 to 5,000.
● CS Proceedings dataset (30MB): We collected scientific publications within the Computer Science domain: 2,957 documents from the proceedings of conferences in the following areas: “Databases” (VLDB, EDBT), “Data mining” (KDD, ICDM), “E-Learning” (ECTEL, ICWL), “IR” (SIGIR, ECIR), “Multimedia” (ACM MM, ICMR), “CH Interfaces” (CHI, IUI), and “Web Science” (WWW, HYPERTEXT). The number of publications per area varies from 179 to 905. We limited our selection to conferences that took place in 2011 and 2012 and to publications longer than four pages, and we removed the references and acknowledgment sections.
● WWW proceeding dataset (26MB): We collected conference proceedings from t...
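Given the one-bag-of-terms-per-line file layout described above, a minimal loading-and-training sketch might look as follows (the file name and topic count are illustrative, and scikit-learn's LDA stands in for the implementation under test):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical data set file: one line per document, each line a
# whitespace-separated bag of terms, as described above.
with open("flickr_emotions.txt", encoding="utf-8") as f:
    docs = [line.strip() for line in f if line.strip()]

counts = CountVectorizer().fit_transform(docs)   # document-term matrix
lda = LatentDirichletAllocation(n_components=20, random_state=0)
doc_topics = lda.fit_transform(counts)           # per-document topic mixture
```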
Data Sets. The Data Set is the intellectual property of the Licensor and its List Owners. The Licensee acknowledges the Licensor’s ownership of the Intellectual Property and agrees that it is an original work that has been created, developed and maintained by the Licensor and its List Owners, who have spent considerable time and expense on its compilation and authorship. The loss or abuse of this Intellectual Property could therefore constitute a considerable financial loss to the Licensor and the List Owners, and the Licensee agrees that it will not commit, or cause to be committed, any act that would damage the value of the Intellectual Property in the Data Sets. It is the responsibility of the Licensee to determine the fitness of the Data Set for a particular purpose. The Licensor offers no guarantee as to the fitness of the Licensed Data Set for a particular purpose, and the maximum extent of the Licensor’s liability to the Licensee is the License fee paid for the Data Set under the Contract. The Licensee agrees to the following additional usage terms for the Data Set Licensed:
Data Sets. The table below presents updated information demonstrating the compliance of the project with Article 29 of the Grant Agreement. It covers the project publications and the research data needed to validate the results presented in the deposited scientific publications. Complete information about the publications themselves is available in D7.6.
● Publication in Conference proceeding/Workshop: “Real-Time Connectivity Capabilities of Cellular Network for Smart Grid Applications”. DOI: n/a. Authors: Xxxx Xxxxxxxxx, Xxxxx Xxxxxxx, German Xxxxxxxx Xxxxxxx. Venue: EuCNC 2018 conference, Ljubljana, 2018. Peer-reviewed: YES. Open access: n/a. Publication repository: n/a. Dataset repository: n/a.
● Publication in Conference proceeding/Workshop: “Reasoning on Adopting OPC UA for an IoT-Enhanced Smart Energy System from a Security Perspective”. DOI: 10.1109/CBI.2018.10060. Author: Xxxxxx Xxxxxxxxxxx. Venue: 2018 IEEE 20th Conference on Business Informatics (CBI), Vienna, 2018. Peer-reviewed: YES. Open access: Yes (Green OA). Publication repository: n/a. Dataset repository: n/a.
● Publication in Conference proceeding/Workshop: “Novel power electronics and used EV batteries in grid optimisation”. DOI: xxxx://xxx.xxg/10.5281/zenodo.3205125. Author: Xxxxxxxxx Xxxx-Xxxxxxxx. Venue: INVADE Black Sea 2018 Workshop, Varna, 2018. Peer-reviewed: NO. Open access: n/a. Publication repository: n/a. Dataset repository: n/a.
● Publication in Conference proceeding/Workshop: “Low Voltage Grid Operation Scheduling with Uncertainties”. DOI: xxxx://xxx.xxg/10.1007/978-3-030-20055-8_47. Authors: Xxxxxx Xxxxxx, Xxxxxx Xxxxxxx-Xxxxxxxx, Xxxx Xxxxxxx. Venue: SOCO - International Conference on Soft Computing Models in Industrial and Environmental Applications, Sevilla, 2019. Peer-reviewed: YES. Open access: Yes (Green OA). Publication repository: xxxx://xxx.xxxxxx.xxx/00000/06678. Dataset repository: Confidential.
● Publication in Conference proceeding/Workshop: “Methodology for the sizing of a hybrid energy storage system in low voltage distribution grids”. DOI: 10.1109/MPS.2019.8759696. Authors: Xxxxxxxx Xxxxxx-Xxxxxxxxxx, Xxxxxxxxx Xxxx-Xxxxxxxx, Xxxxxxx Xxxxxx, Xxxxxx Xxxxxxx-Xxxxxxx, Xxxxx Xxxxxxx, Xxxxx Xxxxxxx-Xxxxxxxxx. Venue: Proceedings of the 2019 8th International Conference on Modern Power Systems (MPS), Cluj-Napoca, 2019. Peer-reviewed: YES. Open access: Yes. Publication repository: xxxxx://xxx.xxx/10.5281/zenodo.3240014. Dataset repository: xxxxx://xxxxxxxxx.xxx.edu/handle/2117/337124.
● Publication in Conference proceeding/Workshop: “Resolvd - renewable penetration levered by efficient low voltage distribution grids. Specifications and use case analysis”. DOI: xxxx://xx.xx ...