Talk:Extended Kalman filter



Robust Extended Kalman Filters


There are any number of numerical improvements for extended Kalman filters. I am unsure whether we should include one in particular, namely the robust extended Kalman filter. Does anyone have any thoughts on this? Zoratao (talk) 01:31, 14 February 2013 (UTC)[reply]

Untitled


Why would the EKF be "considered the de facto standard in the theory of nonlinear state estimation" if the Unscented Kalman Filter is "an improvement to the extended Kalman filter"? (Both quotes are from the article.) Is UKF considered to be a subset of EKF, rather than a separate algorithm? Perhaps the article could clarify. --76.27.96.159 (talk) 03:23, 19 February 2009 (UTC)[reply]

Because, in engineering practice, the EKF is (still) used everywhere, though systems using the UKF are starting to become more common in the academic literature (at least as far as the GPS/inertial and computer-vision literature is concerned). The UKF is an improved estimator over the EKF (since it handles nonlinearities better), but it relies on a different heuristic: it is better (more accurate) to approximate the noise distribution as Gaussian than it is to approximate an arbitrary nonlinear function as linear over some small interval. For reference, see Wan and van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation", IEEE AS-SPCC, 2000.

Damien d (talk) 07:00, 4 August 2010 (UTC)[reply]
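
For anyone trying to picture the difference between the two heuristics, here is a rough numerical sketch (mine, not taken from the Wan/van der Merwe paper; the nonlinearity sin(x) and all the numbers are arbitrary). It propagates a Gaussian once by linearizing about the mean, EKF-style, and once by brute-force sampling of the distribution, just to show what the linearization misses:

    # Propagate x ~ N(mu, sigma^2) through f(x) = sin(x) two ways.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.8, 0.5
    f, df = np.sin, np.cos   # nonlinearity and its derivative

    # EKF-style: linearize f about the mean, then propagate the Gaussian exactly.
    mean_lin = f(mu)
    var_lin = (df(mu) ** 2) * sigma ** 2

    # Distribution-approximation style: sample the Gaussian, push samples through f.
    fx = f(rng.normal(mu, sigma, 100_000))
    print(f"linearized: mean={mean_lin:.4f}, var={var_lin:.4f}")
    print(f"sampled:    mean={fx.mean():.4f}, var={fx.var():.4f}")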

Perhaps it should be noted that the EKF has a historical market share and is currently the most commonly applied filter, since it has roughly a 30-year head start on the UKF. It would also need to be clarified what is meant by "an improvement to the extended Kalman filter", because in optimization the meaning of "better" or "improvement" is technical, so in what sense is it an improvement? There are some noted limitations to the UKF, since it represents the transition as a Gaussian; this is noted in van der Merwe et al. Secondly, what the paper says is "[the EKF] can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter." There are several problems with this statement, but I will let them go for now; the point being that it is certainly not clear in what cases the UKF outperforms the EKF and why. — Preceding unsigned comment added by Zoratao (talkcontribs) 21:42, 26 December 2011 (UTC)[reply]

The underlying reason that the EKF is the de facto standard is that the difficulty in all nonlinear Kalman-type filters is the process of linearization. Julier and Uhlmann's filter is a good one; however, they have said too much, and many of the things they publish have been disputed. Firstly, the UKF is a special example of something called a linear regression Kalman filter (LRKF), though they rebut that so is a particle filter, which predates the Kalman filter. The EKF relies on a Taylor series to perform linearization and the UKF relies on linear regression; the main difference between the UKF and the LRKF is that the LRKF selects samples about the mean randomly, whereas the UKF uses the so-called unscented transform, which picks points based on the linearly independent column vectors of the matrix square root of the covariance. This is nice and helps improve the performance of the sampling algorithm. It doesn't really dethrone the EKF though. (That's just my opinion.) 69.151.50.154 (talk) 17:49, 13 February 2013 (UTC)[reply]
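
To make the "matrix square root" remark concrete, here is a minimal sigma-point sketch (my own simplification: the usual weights and the alpha/beta/kappa scaling parameters are left out, and a Cholesky factor is just one possible choice of square root):

    import numpy as np

    def sigma_points(mean, cov, scale=1.0):
        """Return 2n+1 sigma points for an n-dimensional Gaussian."""
        n = len(mean)
        sqrt_cov = np.linalg.cholesky((n + scale) * cov)  # a matrix square root of the scaled covariance
        points = [mean]
        for i in range(n):                                # one pair of points per column
            points.append(mean + sqrt_cov[:, i])
            points.append(mean - sqrt_cov[:, i])
        return np.array(points)

    mean = np.array([1.0, 2.0])
    cov = np.array([[0.5, 0.1],
                    [0.1, 0.3]])
    print(sigma_points(mean, cov))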

User: An Engineer that knows better


The non-linear equations of the process, expressed by f, lead to a Jacobian F = df/dx. But the story does not end there. The linear equations do not depend on F, but rather on e^(F*t), which can be approximated (to first order) by Phi = I + F*Dt, called the state transition matrix (STM). Only at this point can you start to work on the variance equations, which are always dealt with in the linear domain (part of the KF's limitations, I'm afraid, and the reason the unscented filter was invented). Anyway, this means that you should never see F in the covariance equations (or at least not without a Dt multiplying it). Put it this way: this article is wrong, VERY wrong. WAY OFF! Get your facts straight, Wikipedia. — Preceding unsigned comment added by 85.240.68.171 (talk) 01:01, 29 January 2014 (UTC)[reply]
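
As a quick numerical check of the Phi = I + F*Dt point (example values for F and dt of my own choosing; requires SciPy):

    import numpy as np
    from scipy.linalg import expm

    F = np.array([[0.0, 1.0],
                  [0.0, -0.5]])        # example continuous-time Jacobian
    dt = 0.1                           # time step

    phi_exact = expm(F * dt)           # exact STM for a (locally) time-invariant F
    phi_first = np.eye(2) + F * dt     # first-order approximation I + F*Dt

    print(phi_exact)
    print(phi_first)
    print("max abs difference:", np.abs(phi_exact - phi_first).max())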

I'm afraid you don't know better. It's called Bayes' theorem and the measurement update. If you have an exact linear system then you are correct: that is the solution to the differential equation dx/dt = A*x, where A is either a scalar or a matrix. Where you are wrong: 1. We are not dealing with first-order ODEs. 2. Even if we were, we would be using a measurement-update solution, which doesn't have an analytic solution, because you can't blindly recurse ad infinitum as you can with a system with no measurements. — Preceding unsigned comment added by Zoratao (talkcontribs) 18:33, 11 March 2014 (UTC)[reply]

Zoratao. Sadly, I do know better :). The extended Kalman filter is a hack, where we linearize a non-linear system to get that "exact linear system" and treat it as a regular ODE. It follows from the linearization of non-linear systems that the linearized system's validity is tied to a point in time, hence no analytical solution is valid as you drift away in time from that linearization point. That also answers your second point. Nothing of what I'm saying here is either new or complicated, for anyone versed in Kalman filters anyway. That's why I find your comment a bit disconcerting... Don't take it from me, go ahead and verify it yourself: http://eu.wiley.com/WileyCDA/WileyTitle/productCd-EHEP002052.html

Ok, let's look at it from the physical point of view. Take x to be a scalar position in meters [m]. Then dx/dt = f(x,...) is a velocity in [m/s]. Consequently, we have the Jacobian F = df/dx expressed in [1/s]. Also, the variance associated with the estimation error, P, is in square meters [m^2]. Now, according to Wikipedia "as is", P = F*P*F' + Q, which means that [m^2] = [1/s]*[m^2]*[1/s] + [m^2]... Do you follow? It's not P = F*P*F' + Q, but rather P = Phi*P*Phi' + Q, and Phi is a non-dimensional matrix; otherwise the equation violates the physics behind it.

I sometimes supervise university students who use Wikipedia as a source for reports and algorithms. That's how I found this gross error. Since then, I have changed my mind about Wikipedia. Use it for trivia, sure, but never trust it for knowledge that may influence your own future. Zoratao, you're not helping. — Preceding unsigned comment added by 85.242.240.178 (talk) 23:43, 13 April 2014 (UTC)[reply]
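
To put numbers on the units argument, here is a minimal covariance-prediction sketch under the discretized convention being argued for (the constant-velocity model, the time step, and the noise values are all placeholders of mine, and Qd = Qc*dt is only a first-order discretization):

    import numpy as np

    F = np.array([[0.0, 1.0],
                  [0.0, 0.0]])     # continuous-time Jacobian of a constant-velocity model
    dt = 0.1                        # time step [s]
    P = np.diag([4.0, 1.0])         # covariance of [position, velocity]: [m^2, (m/s)^2]
    Qc = np.diag([0.0, 0.2])        # continuous-time process noise density (placeholder)

    Phi = np.eye(2) + F * dt        # first-order approximation of the STM expm(F*dt)
    Qd = Qc * dt                    # first-order discretization of the process noise
    P_pred = Phi @ P @ Phi.T + Qd   # discrete-time covariance prediction

    print(P_pred)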

Consistency check


I believe there are some issues with this article, although this is not my area of expertise. The predicted covariance estimate on this page and the one on https://en.wikipedia.org/wiki/Kalman_filter write their terms with different time indices. I wanted to see if this was an inconsistency.


Also, I agree with the above post that "The linear equations do not depend on F, but rather on e^(F*t), which can be approximated (to first order) by Phi = I + F*Dt." Although I'm not an expert and we both could be wrong. Mouse7mouse9 21:55, 12 February 2014 (UTC)


To help you out: the difference is in the noise term, which is presumed to be independent and identically distributed. Its index is unimportant and changes from text to text; it is only a matter of preference in this case. However, the indices of the non-noise terms are important. Those too can change, but only by translation. — Preceding unsigned comment added by Zoratao (talkcontribs) 18:42, 11 March 2014 (UTC)[reply]

I recommend editing the indices on this page to match those on the Kalman filter page. I recognize (now that I have read three different textbooks on Kalman filters) that it really is a matter of taste. But to a novice seeing it for the first time (as I was when I first opened the page about two months ago), the difference between the two pages encourages a search for a reason why they are different. Sailby9 (talk) 18:32, 15 December 2015 (UTC)[reply]
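
For later readers, the two index conventions being discussed look like this side by side (a sketch of the usual textbook forms, not a quote of either article's exact wording at the time; both evaluate the same Jacobian at the same point, only the label differs):

    P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^\top + Q_{k-1},
        \qquad F_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1}}

    P_{k|k-1} = F_{k} P_{k-1|k-1} F_{k}^\top + Q_{k},
        \qquad F_{k} = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1}}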


Also, I do not believe that a hatted control input (u-hat) should ever be present in any section of this article. u, unless I am very much mistaken, is the control input. It is known for every instance of k. Therefore, it can be referenced directly and used to make the prediction. This is how the algorithm is presented in Probabilistic Robotics, by Sebastian Thrun, Wolfram Burgard, and Dieter Fox (Chapter 3, Section 3). Sailby9 (talk) 14:59, 16 December 2015 (UTC)[reply]
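
A minimal predict-step sketch in the spirit of that presentation, with the known control input referenced directly (the function names, the toy one-dimensional model, and the numbers are placeholders of mine, not the article's notation):

    import numpy as np

    def ekf_predict(x_est, P, u, f, F_jac, Q):
        """One EKF prediction using the known control input u."""
        x_pred = f(x_est, u)            # propagate the previous estimate with the known input
        F = F_jac(x_est, u)             # Jacobian evaluated at the previous estimate
        P_pred = F @ P @ F.T + Q        # predicted covariance
        return x_pred, P_pred

    # toy one-dimensional example: x_k = x_{k-1} + u (time step folded into u)
    f = lambda x, u: x + u
    F_jac = lambda x, u: np.array([[1.0]])
    x_pred, P_pred = ekf_predict(np.array([0.0]), np.array([[1.0]]), 0.5,
                                 f, F_jac, np.array([[0.1]]))
    print(x_pred, P_pred)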

Doubtful


The section which contains the calculations for the predict and update phases seems wrong.

You are calculating the P matrix for time k based on time k-1, using the F matrix at time k. But you cannot calculate the F matrix at time k until you have the state x_{k|k}, which is the last step in the process. Lathamibird (talk) 14:19, 7 January 2016 (UTC)[reply]
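
For what it's worth, here is a sketch of one full cycle under the convention in which the Jacobian labelled F_k is evaluated at the previous posterior estimate x_{k-1|k-1}, so that it is available before x_{k|k} is ever computed (the function names, the toy model, and the numbers below are placeholders, not the article's definitions):

    import numpy as np

    def ekf_step(x_post, P_post, u, z, f, F_jac, h, H_jac, Q, R):
        # predict: uses only quantities known at the start of step k
        F = F_jac(x_post, u)                  # Jacobian at x_{k-1|k-1}
        x_pred = f(x_post, u)
        P_pred = F @ P_post @ F.T + Q
        # update: the measurement z_k is folded in afterwards
        H = H_jac(x_pred)                     # measurement Jacobian at x_{k|k-1}
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x_post)) - K @ H) @ P_pred
        return x_new, P_new

    # toy one-dimensional usage
    f = lambda x, u: x + u
    F_jac = lambda x, u: np.array([[1.0]])
    h = lambda x: x
    H_jac = lambda x: np.array([[1.0]])
    x, P = ekf_step(np.array([0.0]), np.array([[1.0]]), 0.5, np.array([1.0]),
                    f, F_jac, h, H_jac, np.array([[0.1]]), np.array([[0.2]]))
    print(x, P)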

Notation y and z


FYI I reverted your recent edit. The notation didn't match the other articles and didn't make sense as-is. -Roger (talk) 12:44, 6 April 2018 (UTC)[reply]

I wondered why there is a tilde over y (the residual y~_k) when there is no plain y anywhere. When you add a diacritic to a variable there should be a plain one: x^ (x-hat) is the estimate of x. I have seen z^_k used as the estimate of the measurement of the state, i.e. z^_k = h(x^_{k|k-1}). Then z~_k is the difference between the actual value and the estimated one. Here z is used for the measurement, so I put z~_k as the difference. --Per W (talk) 12:12, 25 April 2018 (UTC)[reply]
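
Written out, the two conventions under discussion are (a sketch using the article's h and predicted state; the second line is the suggested alternative):

    \tilde{y}_k = z_k - h(\hat{x}_{k|k-1})

    \hat{z}_k = h(\hat{x}_{k|k-1}), \qquad \tilde{z}_k = z_k - \hat{z}_k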