Main Results. In this paper we focus on two optimization problems, namely the sparse Minimum Bisection Problem (sMBP) and the Xxxxxx Quadratic Assignment Problem (LQAP). Formal definitions are given below. However, we should add that many of our results hold for a larger class of optimization problems as long as log m = o(N) (see Xxxxxxxxxxx, 1995). In the rest of the paper we will use the temperature rescaling β = β̃ log m/N with β̃ = O(1), which together with log m = o(N) explains the β → 0 limit. This rescaling was justified in (Xxxxxxx et al., 2014). For these two problems we shall provide tight asymptotics for the free energy (5), and compute asymptotically the log-posterior agreement as well as the β∗ that maximizes the posterior kernel.
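Since equation (5) is not reproduced here, the following is only a generic sketch of the free-energy quantity referred to above, assuming the standard Gibbs form F_β = −β⁻¹ log Σ_c exp(−β R(c)) over a finite set of m candidate solutions with costs R(c); the cost values and the rescaling constant β̃ below are illustrative placeholders, not from the paper.

```python
import numpy as np

def free_energy(costs, beta):
    """Gibbs free energy F_beta = -(1/beta) * log sum_c exp(-beta * R(c)),
    computed stably via the log-sum-exp trick."""
    costs = np.asarray(costs, dtype=float)
    shift = (-beta * costs).max()                      # numerical stabilizer
    logZ = shift + np.log(np.exp(-beta * costs - shift).sum())
    return -logZ / beta

# temperature rescaling beta = beta_tilde * log(m) / N (form assumed from the text)
m_solutions, N = 1000, 10_000
beta_tilde = 1.0
beta = beta_tilde * np.log(m_solutions) / N

rng = np.random.default_rng(0)
costs = rng.normal(size=m_solutions)                   # placeholder cost landscape
F = free_energy(costs, beta)
```

As β grows, F approaches the minimum cost; as β → 0, F diverges like −(log m)/β, which is why the β̃ = O(1) rescaling with log m = o(N) is the interesting regime.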
2.1. Minimum bisection and quadratic assignment optimization problems

This section introduces the combinatorial optimization problems that will be used to describe our findings. These problems fall into the log m = o(N) class specified in Sec. 1.2 and cover a wide range of practical applications in signal processing and neural information processing.

Minimum bisection problem (MBP). Consider a complete undirected weighted graph G = (V, E, X) of n vertices, where n is an even number. The input data instance X is represented by (random) weights (W_i)_{i∈E} of the graph edges.
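For concreteness, a minimal brute-force sketch of the minimum bisection problem just defined: split the vertex set into two equal halves minimizing the total weight of cut edges. The weights below are illustrative; the enumeration is exponential in n and is meant only to make the definition operational.

```python
from itertools import combinations

def min_bisection(n, weight):
    """Brute-force minimum bisection: partition vertices 0..n-1 into two
    equal halves S and V\\S minimizing total cut-edge weight.
    `weight[(i, j)]` with i < j is the weight of edge {i, j}."""
    assert n % 2 == 0
    best_cost, best_side = float("inf"), None
    # fix vertex 0 in S so each bisection is enumerated once
    for rest in combinations(range(1, n), n // 2 - 1):
        S = {0, *rest}
        cost = sum(w for (i, j), w in weight.items()
                   if (i in S) != (j in S))
        if cost < best_cost:
            best_cost, best_side = cost, S
    return best_cost, best_side

# toy instance: two heavy pairs {0,1} and {2,3}, weakly cross-linked
w = {(0, 1): 5.0, (2, 3): 5.0, (0, 2): 1.0,
     (1, 3): 1.0, (0, 3): 1.0, (1, 2): 1.0}
cost, S = min_bisection(4, w)  # optimal bisection keeps the heavy pairs intact
```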
Main Results. In a first step, a literature study was performed to screen observed pollutant concentrations of organic acids (formic acid, acetic acid), formaldehyde and volatile organic compounds (VOCs) within museum enclosures used for exhibiting and storing movable assets. This research showed that there are still pollutant problems in showcases throughout Europe, as shown by the investigation conducted in the EU FP5 MASTER project, see Figure 15.
Main Results. Figure 1 shows the fringe visibility loss resulting from the mirror vibrations in various operational conditions and for three different observing wavelengths: visible (0.6 μm), near infrared (2.2 μm) and thermal infrared (10 μm). The VLTI error budgets call for a 1% visibility loss due to vibrations inside the telescope for any of these observing wavelengths. This corresponds respectively to an OPL variation of 14, 50 and 215 nanometers r.m.s.
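The conversion from visibility loss to OPL jitter can be sketched under a common assumption (not stated in the excerpt) of Gaussian phase jitter, V = exp(−σ_φ²/2) with σ_φ = 2πσ_OPL/λ. This model reproduces the 14 and 50 nm figures closely; at 10 μm it gives roughly 226 nm rather than 215, the small discrepancy presumably reflecting rounding or a slightly different effective wavelength in the actual error budget.

```python
import math

def opl_rms_for_visibility_loss(wavelength_m, visibility_loss):
    """OPL jitter (r.m.s., meters) producing a given fringe-visibility loss,
    assuming Gaussian phase jitter: V = exp(-sigma_phi**2 / 2),
    sigma_phi = 2*pi*sigma_opl / lambda. (Model assumed, not from the text.)"""
    sigma_phi = math.sqrt(-2.0 * math.log(1.0 - visibility_loss))
    return wavelength_m / (2.0 * math.pi) * sigma_phi

for lam_um in (0.6, 2.2, 10.0):
    sigma_nm = opl_rms_for_visibility_loss(lam_um * 1e-6, 0.01) * 1e9
    print(f"{lam_um:5.1f} um -> {sigma_nm:6.1f} nm rms")
```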
Main Results. In this paper we prove the following theorem.
Theorem 1.1. For k < n, there is no deterministic wait-free protocol in the shared atomic registers model which solves the k-set agreement problem in a system of n processors.
(i) two schedules are “indistinguishable” if for any protocol they exhibit the same output behavior; (ii) a set S of schedules is “knowable” if there is a protocol which “recognizes” it, in the sense that for some specified output symbol, the protocol produces that symbol during an execution if and only if the execution proceeds according to some schedule from S.
Theorem 1.1 is proved by analyzing this topology. Our approach reveals and exploits a close analogy between the impossibility of wait-free k-set agreement and a lemma of Knaster, Xxxxxxxxxx, and Xxxxxxxxxxxx (KKM lemma) [1], which is equivalent to the fixed point theorem for the closed unit ball Bm in m-dimensional Euclidean space: if f is a continuous map from Bm to itself, then there exists a point x ∈ Bm such that f(x) = x. Very roughly, f corresponds to a distributed protocol Π, and the fixed point x corresponds to the schedule for which Π fails to solve the k-set agreement. The increase in difficulty of the k-set agreement proof in going from the case k = 1 to the case k > 1 corresponds to the increase in difficulty in going from the fixed point theorem for the interval [−1, 1], which is very simple, to the theorem for balls in higher dimension, which, while elementary, is considerably harder. An additional obstacle in our work is that, while the topological structure of Bm is well understood, we must develop the topological structure for the set of schedules from scratch. While the explicit use of topology can be avoided, we have retained the topological structure of the proof, because this is what drove the proof and it provides important insight into what is going on. Our topological structure has an intuitive interpretation in terms of the information about an execution which is “public knowledge.” We believe that it will be worthwhile to explore the connection with the formal theory of distributed knowledge [14]. The inspiration for the topological approach came from Xxxxxxxxx’x work [10], in which the combinatorial properties of triangulations in Rk were used to obtain certain reductions among various decision problems. There is a considerable literature concerning topologies underlying computation in general [27] and distributed computing in particular [26]. The main b...
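The "very simple" one-dimensional case mentioned above can be made concrete: for a continuous f mapping [−1, 1] into itself, g(x) = f(x) − x satisfies g(−1) ≥ 0 and g(1) ≤ 0, so the intermediate value theorem yields a fixed point, which bisection locates. A small sketch (the example map is illustrative):

```python
def fixed_point_1d(f, lo=-1.0, hi=1.0, tol=1e-12):
    """Find a fixed point of a continuous f: [-1, 1] -> [-1, 1] by
    bisection on g(x) = f(x) - x. Because f maps the interval into
    itself, g(lo) >= 0 and g(hi) <= 0, so a root (fixed point) exists."""
    g = lambda x: f(x) - x
    assert g(lo) >= 0.0 and g(hi) <= 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) >= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# f(t) = (t^2 - 1)/2 maps [-1, 1] into itself; its fixed point is 1 - sqrt(2)
x = fixed_point_1d(lambda t: (t * t - 1.0) / 2.0)
```

Nothing so constructive survives in higher dimension, which mirrors the jump in difficulty from k = 1 to k > 1 described in the text.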
Main Results. The following extends the single-letter upper bound (36) in Proposition 3 to the multi-user case.
Main Results. The IEA TEM#96 contributes to a better understanding of the issues, the challenges and the opportunities related to the large amount of wind power capacity that is reaching its end-of-life. A community of experts from very different kinds of organizations actively contributed to the discussion on the three sub-topics: decommissioning, repowering and recycling. In the General Framework, Decommissioning and Repowering session, it was underlined that most wind plants at the end-of-life will probably benefit from a life extension, but in a medium- to long-term scenario the amount of repowering interventions is going to grow significantly. Repowering interventions offer many benefits, such as:
• Higher energy production in the same area
• Better support from new turbines to the power grid
• Environmental and visual impact: fewer turbines = less impact
• Areas already used for wind energy: better social acceptance
• Reduction of the national energy price
• Increase of (temporary) jobs in the sector
However, there are still many issues/challenges, such as:
• No specific regulation for repowering
• Administrative matters: complex, unclear and long permitting
• Grid connection: lack of available grid capacity for repowered plants
• Transport to site of the new, bigger wind turbines (especially in highly complex terrain)
• (New) environmental and landscape constraints
• Difficulty in allocating new incentives for RES and the need for new mechanisms
• Difficulty in developing wind plants at grid parity
• Dismantling and recycling of the components (turbine components - cables - foundations)
Most of the capacity facing end-of-life today is onshore; however, in a few years the number of offshore wind plants reaching end-of-life is going to grow very fast, owing to the boom of offshore installations in the last decade. Some specific challenges, such as dismantling the offshore foundations, will have to be considered in this case.
Decommissioning and repowering a great amount of wind power capacity means dealing with a high number of “waste” turbine components. The challenge of recycling these components, in particular the blades, has also been discussed in the context of a circular economy. In particular, different recycling methods for composite materials have been presented, underlining also the issue of traceability at each step of the process. The breakout sessions identified research gaps and needs for future collaboration for each subtopic. Concerning recyc...
Main Results. Two parallel research lines based on large-scale GFET array TDM and FDM have been explored in order to increase the probability of success at the end of the project. In general, TDM is more suitable for low-power systems, while FDM exhibits less sensitivity to GFET mismatch, channel crosstalk and CMOS flicker noise.
• The first 1024-channel read-out ASIC for the TDM of 32×32 GFET sensor arrays (ASIC2) has been designed and fabricated in 1.8-V 0.18-µm 6-metal CMOS technology. Experimental validation has been carried out using a test vehicle chip (ASIC2T) in order to characterize the channel CMOS circuits and to develop an automated self-calibration methodology for inter-row circuit equalization. The ASIC2T has also been employed as an integrated tool for the automated calibration of GFET arrays, i.e. choosing the array gate voltage bias level, the column 1-out-of-8 code for DC current offset cancellation and the row PGA gain. Based on these optimum ASIC2T configurations, TDM readouts of 4×4 GFET sensor signals have been successfully obtained.
• The first 1024-channel read-out ASIC for the FDM of 32×32 GFET sensor arrays (ASIC3) has been designed and fabricated in 1.8-V 0.18-µm 6-metal CMOS technology. Thanks to its physical and logical compatibility with ASIC2, the ASIC3 PCB modules will reuse most of the already tested ASIC2 headstage components (e.g. the FPGA PCB module).
• The first large-scale headstage has been developed based on a custom 1024-channel TDM ASIC2 PCB module in combination with a compact FPGA PCB module for wired ASIC configuration and readout through the existing MCS HW/SW systems. By stacking multiple ASIC2 PCB modules, headstages for rodents and minipigs can be scaled up from 1024 to 10240 GFET recording sites.
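The TDM/FDM distinction above can be illustrated with a toy numerical model (not the ASICs' actual circuitry): four constant sensor levels share one line either in interleaved time slots (TDM) or on orthogonal carriers recovered by lock-in demodulation (FDM). All signal levels, carrier frequencies, and noise figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.array([0.2, -0.5, 0.8, 0.1])   # hypothetical GFET signal levels
n = 4096
t = np.arange(n)

# --- TDM: each channel occupies every 4th sample slot on the shared line ---
tdm_line = np.empty(n)
for ch in range(4):
    tdm_line[ch::4] = levels[ch]
tdm_line += 0.01 * rng.standard_normal(n)          # additive readout noise
tdm_recovered = np.array([tdm_line[ch::4].mean() for ch in range(4)])

# --- FDM: each channel rides on its own carrier; all are summed on the line ---
freqs = np.array([128, 256, 384, 512]) / n          # integer cycles -> orthogonal
carriers = np.cos(2 * np.pi * np.outer(freqs, t))   # shape (4, n)
fdm_line = levels @ carriers + 0.01 * rng.standard_normal(n)
# lock-in demodulation: multiply by the matching carrier and average
fdm_recovered = 2.0 * (carriers @ fdm_line) / n
```

Both recoveries agree with the original levels to within the noise floor; in a real array, the FDM path additionally averages out mismatch and low-frequency (flicker) disturbances that fall outside the carrier bands.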
Main Results. The algebraic Riccati equation can be written in factorized form; let P̄_j = H̄_j H̄_j^T. The factor can be computed recursively, for j = 0, 1, ..., as

H̄_0 = P_0^{1/2}, H̄_j = [H̄_{j−1}, D̄_j]; D̄_1 = L_0 (C_{d0} P̄_0 C_{d0}^T + r_{d0}^2)^{1/2}, D̄_j = A D̄_{j−1}.   (167)

There, P_0 is the solution of the algebraic Riccati equation for the unaugmented process model. The vector L_0 is the injection gain in (165) corresponding to P̄_0. The recursive formula (167) corresponds to the Lyapunov difference equation run from P̄_0. The vectors D̄_j for j = 1, ..., n_{d,max} can be pre-computed off-line together with H̄_0. The newly proposed representation of the state covariance matrix is as follows:

P̄_{k+1|k} = ... + D_{k+1|k} D_{k+1|k}^T.

There, the time-varying index i(k), also called the ‘target delay’, is given by i(k) = min{i(k−1) + 1, n_{d,max}} if min J(k) > i(k−1) or J(k) = ∅.
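The off-line pre-computation of the factors D̄_j in (167) reduces to repeated multiplication by the state-transition matrix. A minimal numpy sketch; the matrix A, the first column D̄_1, and the horizon n_{d,max} are illustrative placeholders, not values from the paper:

```python
import numpy as np

def precompute_D(A, D1, nd_max):
    """Pre-compute the factor columns D_j = A @ D_{j-1}, j = 2..nd_max,
    starting from D_1, as in the recursion (167). Returns [D_1, ..., D_nd_max]."""
    D = [np.asarray(D1, dtype=float)]
    for _ in range(nd_max - 1):
        D.append(A @ D[-1])
    return D

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # placeholder state-transition matrix
D1 = np.array([[1.0], [0.5]])       # placeholder first factor column (gain-derived)
Ds = precompute_D(A, D1, nd_max=5)
```

Each D̄_j is then simply A^{j−1} D̄_1, so the whole table can be stored once and reused at run time.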
Main Results. Consider a first order classical pseudodifferential operator A acting on columns
Main Results. Case 1. We begin with the restricted dataset

{(x_i, ȳ_i), i = 1, ..., n},   (2)

where only the means of the Y variable are provided. The likelihood function can be based on the marginal likelihood of x_i, which is univariate normal N(µ_x, σ_x²), and the conditional likelihood of ȳ_i given x_i, which is again univariate normal with conditional mean linear in x_i and conditional variance independent of x_i, namely

ȳ_i | x_i ∼ N[ µ_y + ρ (σ_y/σ_x)(x_i − µ_x), σ_y²(1 − ρ²)/m_i ],   (3)

resulting in the overall likelihood

L(µ_x, µ_y, σ_x, σ_y, ρ | data) ∝ (σ_x σ_y)^{−n} (1 − ρ²)^{−n/2} exp[ − (1/(2σ_x²)) Σ_{i=1}^n (x_i − µ_x)² − (1/(2σ_y²(1 − ρ²))) Σ_{i=1}^n m_i (ȳ_i − µ_y − ρ (σ_y/σ_x)(x_i − µ_x))² ].
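The likelihood above factors into a marginal and a conditional normal term, which makes it straightforward to evaluate numerically. A small sketch of the log-likelihood, assuming (consistent with the m_i weights in the exponent) that ȳ_i averages m_i observations so its conditional variance is σ_y²(1 − ρ²)/m_i; the data values are toy placeholders:

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(mu_x, mu_y, sigma_x, sigma_y, rho, x, ybar, m):
    """Log-likelihood for the restricted dataset (x_i, ybar_i):
    x_i ~ N(mu_x, sigma_x^2) and
    ybar_i | x_i ~ N(mu_y + rho*(sigma_y/sigma_x)*(x_i - mu_x),
                     sigma_y^2*(1 - rho^2)/m_i)."""
    ll = norm.logpdf(x, loc=mu_x, scale=sigma_x).sum()
    cond_mean = mu_y + rho * (sigma_y / sigma_x) * (x - mu_x)
    cond_sd = np.sqrt(sigma_y**2 * (1.0 - rho**2) / m)
    ll += norm.logpdf(ybar, loc=cond_mean, scale=cond_sd).sum()
    return ll

# toy data (illustrative only)
x = np.array([0.1, -0.4, 0.7])
ybar = np.array([0.3, -0.2, 0.5])
m = np.array([4, 2, 8])
ll = log_likelihood(0.0, 0.0, 1.0, 1.0, 0.5, x, ybar, m)
```

Setting ρ = 0 decouples the two factors, which provides a convenient sanity check of the implementation.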