We propose a certainty-equivalence scheme for the adaptive control of scalar linear systems subject to additive i.i.d. Gaussian disturbances and bounded control inputs, without requiring prior knowledge of bounds on the system parameters or of the control direction. Assuming that the system is at worst marginally stable, we prove mean-square boundedness of the closed-loop system states. Lastly, numerical examples are presented to illustrate our results.
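As a hedged illustration of the certainty-equivalence idea (not the specific scheme analysed in the paper), the following Python sketch estimates the unknown scalar parameters by least squares, including the unknown control direction, and applies a saturated deadbeat-like input; the estimator, the excitation phase and all numerical values are assumptions made for this example.

```python
import numpy as np

# Minimal certainty-equivalence sketch for x[k+1] = a*x[k] + b*u[k] + w[k] with
# unknown (a, b), unknown sign of b, and |u| <= u_max. The batch least-squares
# estimator and the saturated deadbeat-like law are illustrative choices.

rng = np.random.default_rng(0)
a_true, b_true = 1.0, 0.5           # marginally stable plant, unknown to the controller
u_max = 2.0                         # input bound
x, xs, us = 1.0, [], []
for k in range(200):
    if k < 2:                       # brief excitation before any estimate is available
        u = u_max * rng.choice([-1.0, 1.0])
    else:
        Phi = np.column_stack([xs, us])                        # regressors [x[j], u[j]]
        theta, *_ = np.linalg.lstsq(Phi, np.array(xs[1:] + [x]), rcond=None)
        a_hat, b_hat = theta
        # certainty equivalence: act as if the estimates were the true parameters
        u = np.clip(-a_hat * x / b_hat if abs(b_hat) > 1e-6 else 0.0, -u_max, u_max)
    xs.append(x)
    us.append(u)
    x = a_true * x + b_true * u + 0.1 * rng.standard_normal()
print(f"final state magnitude: {abs(x):.3f}")
```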
This paper considers controlled scalar systems relying on a lossy wireless feedback channel. In contrast to the existing literature, the focus is not on the system controller but on the wireless transmit power controller implemented at the system side for reporting the state to the controller. Such a problem may be of interest, e.g., for the remote control of drones, where communication costs may have to be considered. Determining the power control policy that minimizes the combination of the dynamical system cost and the wireless transmission energy is shown to be a non-trivial optimization problem. It turns out that the recursive structure of the problem can be exploited to determine the optimal power control policy. As illustrated in the numerical performance analysis, in the scenario of unperturbed dynamics, the optimal power control policy consists in decreasing the transmit power at the right pace. This allows a significant performance gain compared to conventional policies such as the full transmit power policy or the open-loop policy.
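To illustrate the recursive structure mentioned above, the following toy dynamic program computes a transmit power sequence for a highly simplified model (success probability 1 - exp(-P), deadbeat control once a report is received, no process noise); this model and all parameter values are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from functools import lru_cache

# Toy dynamic program over transmit power levels for a scalar plant x[k+1] = a*x[k] + u[k].
# Assumed model (not from the paper): transmission succeeds with probability 1 - exp(-P);
# once a state report gets through, the controller drives the state to zero and no further
# cost accrues; there is no process noise.

a, lam, N = 0.98, 0.05, 15            # plant pole, energy weight, horizon
powers = np.linspace(0.0, 3.0, 31)    # candidate transmit power levels

@lru_cache(maxsize=None)
def V(k):
    # returns (optimal expected cost-to-go, optimal power) at step k,
    # conditioned on no report having been received so far (so x[k] = a^k * x[0])
    if k == N:
        return 0.0, 0.0
    state_cost = a ** (2 * k)         # x[k]^2 for x[0] = 1 and zero input applied so far
    best = (np.inf, 0.0)
    for P in powers:
        p_succ = 1.0 - np.exp(-P)
        cost = state_cost + lam * P + (1.0 - p_succ) * V(k + 1)[0]
        best = min(best, (cost, P))
    return best

policy = [V(k)[1] for k in range(N)]
# for this stable, unperturbed toy model the optimal transmit power decreases over time
print("optimal transmit powers:", np.round(policy, 2).tolist())
```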
We study emulation-based state estimation for non-linear plants that communicate with a remote observer over a shared wireless network subject to packet losses. To reduce bandwidth usage, a stochastic communication protocol is employed to determine which node should be given access to the network. We describe the overall wireless system as a hybrid model, which allows us to capture the behaviour both between and at transmission instants, whilst covering network features such as random transmission instants, packet losses, and stochastic scheduling. Under this setting, we provide sufficient conditions on the transmission rate that guarantee an input-to-state stability property for the corresponding estimation error system. We illustrate our results with an example involving a Lipschitz non-linear plant.
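The following simulation sketch illustrates the setting (random transmission instants, packet losses, stochastic node scheduling) on a toy Lipschitz plant; the observer and all numerical values are assumed here for illustration and do not reproduce the paper's design conditions.

```python
import numpy as np

# Simulation sketch: two sensor nodes share a lossy wireless network, transmissions
# occur at random instants, a stochastic protocol grants access to one node at a time,
# and packets may be dropped. The plant, observer and all values are assumed here.

rng = np.random.default_rng(6)
dt, T = 1e-3, 20.0
f = lambda x: np.array([x[1], -x[0] - x[1] + 0.5 * np.sin(x[0])])   # globally Lipschitz dynamics
L = np.array([[2.0, 0.0], [0.0, 2.0]])                              # output-injection gain

rate, p_loss = 20.0, 0.3              # mean transmissions per second, packet-loss probability
x = np.array([1.0, -1.0])
xh = np.zeros(2)
y_held = np.zeros(2)                  # most recently received measurement of each node

for _ in range(int(T / dt)):
    if rng.random() < rate * dt:                      # random transmission instant
        node = rng.integers(2)                        # stochastic protocol: one node transmits
        if rng.random() > p_loss:                     # packet survives the lossy network
            y_held[node] = x[node]
    xh = xh + dt * (f(xh) + L @ (y_held - xh))        # observer uses held (possibly stale) data
    x = x + dt * f(x)

print(f"estimation error after {T:.0f} s: {np.linalg.norm(x - xh):.4f}")
```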
This paper presents two schemes to jointly estimate parameters and states of discrete-time nonlinear systems in the presence of bounded disturbances and noise. The parameters are assumed to belong to a known compact set. Both schemes are based on sampling the parameter space and designing a state observer for each sample. A supervisor selects one of these observers at each time instant to produce the parameter and state estimates. In the first scheme, the parameter and state estimates are guaranteed to converge within a certain margin of their true values in finite time, assuming that a sufficiently large number of observers is used and that a persistence of excitation condition is satisfied in addition to other observer design conditions. This convergence margin consists of a part that can be chosen arbitrarily small by the user and a part that is determined by the noise levels. The second scheme exploits the convergence properties of the parameter estimate to perform subsequent zoom-ins on the parameter subspace and thereby achieve tighter margins for a given number of observers. The strengths of both schemes are demonstrated using a numerical example.
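The following sketch illustrates the multi-observer principle: sample the parameter set, run one observer per sample, and let a supervisor pick the observer with the smallest monitoring signal. The plant, the observer gain and the monitoring signal are illustrative assumptions, not the paper's specific design.

```python
import numpy as np

# Multi-observer sketch: sample the compact parameter set, run one observer per sample,
# and let a supervisor select the observer with the smallest accumulated output error.
# The toy plant, observer gain and monitoring signal are illustrative assumptions.

rng = np.random.default_rng(1)
theta_true = 0.7
f = lambda x, u, th: th * np.sin(x) + u            # scalar nonlinear plant
h = lambda x: x                                    # measured output

thetas = np.linspace(0.0, 1.0, 11)                 # samples of the known compact set [0, 1]
xh = np.zeros_like(thetas)                         # one observer state per parameter sample
mu = np.zeros_like(thetas)                         # monitoring signals (discounted output errors)
L, lam = 0.5, 0.95                                 # output-injection gain, forgetting factor

x = 1.0
for k in range(300):
    u = np.sin(0.1 * k)                            # persistently exciting input
    y = h(x) + 0.01 * rng.standard_normal()        # noisy measurement
    e = y - h(xh)
    mu = lam * mu + e ** 2
    xh = f(xh, u, thetas) + L * e                  # update every observer in the bank
    x = f(x, u, theta_true) + 0.01 * rng.standard_normal()

best = int(np.argmin(mu))                          # supervisor: pick the smallest monitoring signal
print(f"selected theta = {thetas[best]:.2f}, state estimate = {xh[best]:.3f}, true state = {x:.3f}")
```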
To investigate solutions of (near-)optimal control problems, we extend and exploit a notion of homogeneity recently proposed in the literature for discrete-time systems. Assuming that the plant dynamics are homogeneous, we first derive a scaling property of their solutions along rays, provided the sequence of inputs is suitably modified. We then consider homogeneous cost functions and reveal how the optimal value function scales along rays. This result can be used to construct (near-)optimal inputs on the whole state space by only solving the original problem on a given compact manifold of smaller dimension. Compared to related works in the literature, we impose no conditions on the homogeneity degrees. We demonstrate the strength of this new result by presenting a new approximate scheme for value iteration, which is one of the pillars of dynamic programming. The new algorithm provides guaranteed lower and upper estimates of the true value function at any iteration and has several appealing features in terms of reduced computation. A numerical case study is provided to illustrate the proposed algorithm.
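The scaling of the optimal value function along rays can be checked numerically on a simple homogeneous example. The sketch below uses a linear plant with a quadratic (degree-2 homogeneous) cost and a finite input set that is scaled along the ray together with the state; it is an illustrative special case, not the general homogeneous setting of the paper.

```python
import numpy as np
from itertools import product

# Numerical illustration of the scaling property: for a homogeneous plant and a
# homogeneous stage cost, the optimal value along a ray scales with the ray
# parameter. The linear/quadratic example below (cost homogeneous of degree 2,
# inputs scaled like the state) is an illustrative special case only.

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
U = [-1.0, 0.0, 1.0]                      # finite input set (scaled along the ray below)
N = 6                                      # horizon

def cost(x0, scale):
    # brute-force optimal finite-horizon cost, with the input set scaled by `scale`
    best = np.inf
    for seq in product(U, repeat=N):
        x, J = x0.copy(), 0.0
        for u in seq:
            u = scale * u
            J += x @ x + u * u            # quadratic stage cost, homogeneous of degree 2
            x = A @ x + B.flatten() * u
        best = min(best, J + x @ x)       # quadratic terminal cost
    return best

x0 = np.array([1.0, -0.5])
lam = 3.0
V1 = cost(x0, 1.0)
V2 = cost(lam * x0, lam)
print(f"V(lam*x0) = {V2:.4f},  lam^2 * V(x0) = {lam**2 * V1:.4f}")   # the two values coincide
```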
We present an event-triggered observer design for linear time-invariant systems, where the measured output is sent to the observer only when a triggering condition is satisfied. We proceed by emulation and we first construct a continuous-time Luenberger observer. We then propose a dynamic rule to trigger transmissions, which only depends on the plant output and an auxiliary scalar state variable. The overall system is modeled as a hybrid system, for which a jump corresponds to an output transmission. We show that the proposed event-triggered observer guarantees global practical asymptotic stability for the estimation error dynamics. Moreover, under mild boundedness conditions on the plant state and its input, we prove that there exists a uniform strictly positive minimum inter-event time between any two consecutive transmissions, guaranteeing that the system does not exhibit Zeno solutions. Finally, the proposed approach is applied to a numerical case study of a lithium-ion battery.
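A hedged simulation sketch of the event-triggered observer idea is given below: the output is transmitted only when a dynamic triggering condition involving the plant output and an auxiliary scalar variable fires, and the Luenberger observer runs on the last transmitted value in between. The specific (Girard-style) triggering rule, gains and plant used here are assumptions for illustration, not the rule proposed in the paper.

```python
import numpy as np

# Event-triggered Luenberger observer sketch: the output is transmitted only when a
# dynamic triggering condition fires; between events the observer uses the last
# transmitted output. The Girard-style rule, gains and plant below are assumptions.

dt, T = 1e-3, 10.0
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.5], [1.0]])                       # Luenberger gain designed by emulation

x = np.array([1.0, -1.0])
xh = np.zeros(2)
eta, sigma, lam = 1.0, 0.05, 1.0                   # auxiliary scalar state, trigger parameters
y_sent = float(C @ x)                              # last transmitted output
events = 0

for _ in range(int(T / dt)):
    y = float(C @ x)
    if eta + sigma * y ** 2 - (y - y_sent) ** 2 <= 0.0:    # dynamic triggering condition
        y_sent, events = y, events + 1                     # transmit the current output
    xh = xh + dt * (A @ xh + (L * (y_sent - float(C @ xh))).flatten())
    x = x + dt * (A @ x)
    eta = eta + dt * (-lam * eta + sigma * y ** 2 - (y - y_sent) ** 2)

print(f"{events} transmissions, final estimation error norm: {np.linalg.norm(x - xh):.4f}")
```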
Cooperative Adaptive Cruise Control (CACC) is a vehicular technology that allows groups of vehicles on the highway to form closely coupled automated platoons to increase highway capacity and safety. The underlying mechanism behind CACC is the use of Vehicle-to-Vehicle (V2V) wireless communication networks to transmit acceleration commands to adjacent vehicles in the platoon. However, the use of V2V networks leads to increased vulnerability to faults and cyberattacks. Here, we address the problem of increasing the robustness of CACC schemes against cyberattacks by using multiple V2V networks and a data fusion algorithm. The idea is to transmit acceleration commands multiple times through different communication channels to create redundancy at the receiver side. We propose a data fusion algorithm to estimate the true acceleration command and to isolate compromised channels. Finally, we propose a robust $H_{\infty }$ controller that reduces the joint effect of fusion errors and sensor/channel noise on the platooning performance (tracking performance and string stability). Simulation results are presented to illustrate the performance of our approach.
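The redundancy-based fusion idea can be illustrated as follows; the median-based fusion rule and the fixed flagging threshold are illustrative choices and not necessarily the fusion algorithm proposed in the paper.

```python
import numpy as np

# Channel-redundancy sketch: the same acceleration command is sent over several V2V
# channels; a fusion rule recovers it and channels that deviate persistently from the
# fused value are flagged. Median fusion and the fixed threshold are assumptions.

rng = np.random.default_rng(3)
n_channels, n_steps, noise_std, threshold = 5, 100, 0.02, 0.2
compromised = {1}                                    # the attacker controls channel 1

flags = np.zeros(n_channels, dtype=int)
for k in range(n_steps):
    a_true = 0.5 * np.sin(0.05 * k)                  # acceleration command of the preceding vehicle
    rx = a_true + noise_std * rng.standard_normal(n_channels)
    for c in compromised:
        rx[c] += 1.0 + 0.1 * k                       # additive attack signal on that channel
    a_fused = np.median(rx)                          # robust to a minority of corrupted channels
    flags += np.abs(rx - a_fused) > threshold        # count suspicious deviations per channel

print("fusion error at last step:", round(abs(a_fused - a_true), 4))
print("channels flagged as compromised:", np.where(flags > n_steps // 2)[0].tolist())
```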
Originating in the artificial intelligence literature, optimistic planning (OP) is an algorithm that generates near-optimal control inputs for generic nonlinear discrete-time systems whose input set is finite. This technique is, therefore, relevant for the near-optimal control of nonlinear switched systems for which the switching signal is the control and no continuous input is present. However, OP exhibits several limitations that hinder its application in a standard control engineering context: for instance, it requires the stage cost to take values in $[0, 1]$, an unnatural prerequisite, and the cost function to be discounted. In this article, we modify OP to overcome these limitations, and we call the new algorithm ${\rm OP}_{\text{min}}$. We then analyze ${\rm OP}_{\text{min}}$ under general stabilizability and detectability assumptions on the system and the stage cost. New near-optimality and performance guarantees for ${\rm OP}_{\text{min}}$ are derived, which have major advantages compared to those originally given for OP. We also prove that a system whose inputs are generated by ${\rm OP}_{\text{min}}$ in a receding-horizon fashion exhibits stability properties. As a result, ${\rm OP}_{\text{min}}$ provides a new tool for the near-optimal, stable control of nonlinear switched discrete-time systems with generic cost functions.
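For context, the sketch below implements a simplified optimistic planning loop in the original setting recalled above (discounted cost, stage cost in $[0, 1]$, finite input set): the leaf with the smallest lower bound on the cost is expanded, and an upper bound certifies near-optimality of the recommended first input. It is a generic OP-style sketch with toy dynamics and cost, not the ${\rm OP}_{\text{min}}$ algorithm itself.

```python
import heapq
import numpy as np

# Simplified optimistic planning sketch: expand the most "optimistic" leaf, i.e. the
# one with the smallest lower bound on the cost of input sequences passing through it.
# Dynamics and stage cost are toy choices; costs are clipped to [0, 1] and discounted,
# as required by the original OP setting.

gamma = 0.9
U = [-1.0, 0.0, 1.0]                                   # finite input set (e.g. switching signal)
f = lambda x, u: np.array([x[0] + 0.1 * x[1], 0.9 * x[1] + 0.1 * u])
ell = lambda x, u: min(1.0, x @ x + 0.1 * u * u)       # stage cost clipped to [0, 1]

def optimistic_plan(x0, budget=300):
    # each leaf stores (accumulated discounted cost, tie-breaker, depth, state, first input)
    leaves = [(0.0, 0, 0, x0, None)]
    counter, best = 1, (np.inf, None)
    while budget > 0:
        J, _, d, x, u0 = heapq.heappop(leaves)          # smallest lower bound first
        for u in U:
            Jn = J + gamma ** d * ell(x, u)
            first = u if u0 is None else u0
            # costs lie in [0, 1], so any continuation costs at most gamma^(d+1)/(1-gamma) more
            best = min(best, (Jn + gamma ** (d + 1) / (1 - gamma), first))
            heapq.heappush(leaves, (Jn, counter, d + 1, f(x, u), first))
            counter += 1
        budget -= 1
    return best

ub, u_first = optimistic_plan(np.array([1.0, 0.5]))
print(f"recommended first input: {u_first}, certified cost upper bound: {ub:.3f}")
```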
We address the problem of robust state reconstruction for discrete-time nonlinear systems when the actuators and sensors are injected with (potentially unbounded) attack signals. Exploiting redundancy in sensors and actuators and using a bank of unknown input observers (UIOs), we propose an observer-based estimator capable of providing asymptotic estimates of the system state and attack signals under the condition that the numbers of sensors and actuators under attack are sufficiently small. Using the proposed estimator, we provide methods for isolating the compromised actuators and sensors. Numerical examples are provided to demonstrate the effectiveness of our methods.
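A much-simplified sketch of the redundancy-based isolation idea is given below: one observer per sensor subset, with the subsets whose residuals remain small declared consistent. The paper's estimator uses unknown input observers that additionally handle actuator attacks and reconstruct the attack signals; the deadbeat observers and the toy system here are assumptions for illustration only.

```python
import numpy as np
from itertools import combinations

# Redundancy-based isolation sketch: run one observer per sensor subset and declare the
# subsets with the smallest accumulated residuals consistent; sensors that disagree with
# the consistent estimate are flagged. Deadbeat observers replace the paper's unknown
# input observers here, and at most one of the four sensors is assumed to be attacked.

A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])   # four redundant sensors
attacked = 2                                                       # sensor 2 is compromised

x = np.array([1.0, -1.0])
subsets = list(combinations(range(4), 2))       # every pair of sensors is observable here
xh = {S: np.zeros(2) for S in subsets}
res = {S: 0.0 for S in subsets}

for k in range(50):
    y = C @ x
    y[attacked] += 0.5 + 0.1 * k                # time-varying attack signal
    for S in subsets:
        Cs, ys = C[list(S)], y[list(S)]
        r = ys - Cs @ xh[S]
        res[S] += float(r @ r)
        xh[S] = A @ xh[S] + A @ np.linalg.inv(Cs) @ r    # deadbeat output injection
    x = A @ x

best = min(subsets, key=lambda S: res[S])        # attack-free subsets only keep the initial transient
y_now = C @ x
y_now[attacked] += 0.5 + 0.1 * 50
flagged = [i for i in range(4) if abs(float(y_now[i] - C[i] @ xh[best])) > 1e-6]
print("consistent sensor subset:", best, "| flagged sensors:", flagged)
```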
We introduce a sequential learning algorithm to address a robust controller tuning problem, which, in effect, finds (with high probability) a candidate solution satisfying the internal performance constraint of a chance-constrained program with black-box functions. The algorithm leverages ideas from the areas of randomised algorithms and ordinal optimisation, and also draws comparisons with the scenario approach; these have all previously been applied to finding approximate solutions for difficult design problems. By exploiting statistical correlations through black-box sampling, we formally prove that our algorithm yields a controller meeting the prescribed probabilistic performance specification. Additionally, we characterise the computational requirements of the algorithm with a probabilistic lower bound on its stopping time. To validate our work, the algorithm is then demonstrated for tuning model predictive controllers on a diesel engine air-path across a fleet of vehicles. The algorithm successfully tuned a single controller to meet a desired tracking error performance, even in the presence of the plant uncertainty inherent across the fleet. Moreover, the algorithm was shown to exhibit a sample complexity comparable to that of the scenario approach.
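The sketch below shows a generic randomised screening loop in this spirit: candidate tunings are sampled, each is evaluated through a black-box cost on sampled plant realisations, and a candidate is returned once its empirical constraint satisfaction is high enough. It is a scenario-style illustration with a toy plant and cost, not the sequential algorithm (with its ordinal-optimisation-based stopping rule) analysed in the paper.

```python
import numpy as np

# Randomised screening sketch for a chance-constrained tuning problem with a black-box
# cost. The scalar plant, tracking cost and all thresholds are assumptions for this
# illustration only.

rng = np.random.default_rng(4)

def blackbox_cost(gain, plant):
    # toy closed-loop tracking cost for a scalar plant x+ = a*x + u with u = -gain*x
    a = plant
    x, J = 1.0, 0.0
    for _ in range(50):
        u = -gain * x
        J += x * x
        x = a * x + u + 0.01 * rng.standard_normal()
    return J

spec, n_scenarios, level = 5.0, 200, 0.95
for trial in range(1000):
    gain = rng.uniform(0.0, 2.0)                       # sample a candidate tuning
    plants = rng.uniform(0.8, 1.2, n_scenarios)        # sampled plant uncertainty (the "fleet")
    ok = np.mean([blackbox_cost(gain, a) <= spec for a in plants])
    if ok >= level:                                    # empirical chance constraint satisfied
        print(f"accepted gain {gain:.3f} after {trial + 1} candidates (satisfaction {ok:.2f})")
        break
```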
In this article, we study the problem of minimizing the sum of potentially nondifferentiable convex cost functions with partially overlapping dependencies in an asynchronous manner, where communication in the network is not coordinated. We study the behavior of an asynchronous algorithm based on dual decomposition and block coordinate subgradient methods under assumptions weaker than those used in the literature. At the same time, we allow different agents to use local stepsizes with no global coordination. Sufficient conditions are provided for almost sure convergence to the solution of the optimization problem. Under additional assumptions, we establish a sublinear convergence rate that, in turn, can be strengthened to a linear rate if the problem is strongly convex and has Lipschitz gradients. We also extend available results in the literature by allowing multiple and potentially overlapping blocks to be updated at the same time, with nonuniform and potentially time-varying probabilities assigned to different blocks. A numerical example is provided to illustrate the effectiveness of the algorithm.
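A minimal sketch of a randomised block-coordinate subgradient step is given below, with several blocks possibly updated at once, nonuniform activation probabilities and local stepsizes; the dual decomposition that makes the scheme fully distributed in the paper is not reproduced, and the toy objective is an assumption for illustration.

```python
import numpy as np

# Randomised block-coordinate subgradient sketch for a sum of (possibly
# nondifferentiable) convex functions whose supports overlap. Several coordinate
# blocks may be updated simultaneously, with nonuniform probabilities and local,
# uncoordinated stepsizes. Toy objective: F(x) = sum_i ||x[supports[i]] - targets[i]||_1.

rng = np.random.default_rng(5)
dim, n_funcs = 8, 4
supports = [np.arange(2 * i, 2 * i + 4) % dim for i in range(n_funcs)]   # overlapping supports
targets = [rng.standard_normal(4) for _ in range(n_funcs)]

def full_subgradient(x):
    # a subgradient of F at x (sign pattern of each L1 term on its support)
    g = np.zeros(dim)
    for i in range(n_funcs):
        g[supports[i]] += np.sign(x[supports[i]] - targets[i])
    return g

blocks = [np.arange(2 * j, 2 * j + 2) for j in range(4)]   # coordinate blocks {0,1}, ..., {6,7}
probs = np.array([0.9, 0.7, 0.5, 0.3])                     # nonuniform activation probabilities
steps = np.array([0.08, 0.06, 0.05, 0.04])                 # local, uncoordinated stepsizes

x = np.zeros(dim)
for k in range(5000):
    g = full_subgradient(x)
    active = rng.random(4) < probs                         # several blocks can fire at once
    for j in np.where(active)[0]:
        x[blocks[j]] -= steps[j] / np.sqrt(k + 1) * g[blocks[j]]

F = sum(np.abs(x[supports[i]] - targets[i]).sum() for i in range(n_funcs))
print(f"objective after randomised block updates: F = {F:.3f}")
```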