# Very simple particle filter algorithm (sequential Monte Carlo method) implementation

**Question** (asked 07 May '12, 09:26 by ShN)

I'm interested in the simple particle filter algorithm from page 11 of www.cs.ubc.ca/~arnaud/doucet_defreitas_gordon_smcbookintro.ps. It seems very simple, but I have no idea how to implement it in practice.

Edit: After the example and explanation given by @smjohns, I've implemented his simple two-state example, just to better understand how it works. Can you please check this simple C++ implementation I made from his example, and tell me whether I understood it correctly? If so, I will add some more questions about the particle filter algorithm. Here is the code: http://pastebin.com/M1q1HcN4

**Answer** (answered 07 May '12, 11:39 by smjohns)

OK, let's extend our two-state model. We attempt to track the position of (let's say) a robot across two states using four particles.

States: A, B (so x_t^i = A means that particle i is at position A at time t).

**Step 1 (model):**

- P(A_0) = 0.5, P(B_0) = 0.5 (the robot starts out equally likely to be in either state).
- P(A_t | B_{t-1}) = 1/3, P(B_t | A_{t-1}) = 2/3 (this is the motion model: any object has probability 2/3 of moving to or staying in B, and probability 1/3 of moving to or staying in A).
- P(y_t = s | x_t = s) = 4/5 (this is the measurement model: if an object is in state s (A or B), the probability that our observation is accurate is 80%).

We create four particles; each particle is equally likely to be in A or B.

**Step 2 (sample and weight):**

Sample: for each particle, P(A_1) = P(A_1|A_0)P(A_0) + P(A_1|B_0)P(B_0) = (1/3)(1/2) + (1/3)(1/2) = 1/3, and P(B_1) = 2/3. In our example, let's say we end up with two particles in A and two particles in B.

Weights: assume that we observe A. The two particles in A each have unnormalized weight 4/5 (P(y_1 = A | x_1 = A)); the two particles in B each have unnormalized weight 1/5 (P(y_1 = A | x_1 = B) = 1/5). So our particles have weights A: 4/5, 4/5; B: 1/5, 1/5. These weights add up to 2, so we normalize by dividing through by 2: A: 2/5, 2/5; B: 1/10, 1/10.

**Step 3 (resample):**

Now we resample our four particles. On each draw, there's an 80% chance of picking A (2/5 + 2/5) and a 20% chance of picking B (1/10 + 1/10). In our example, let's say we end up with three particles in A and one particle in B.

Repeat steps 2 and 3 until you have the robot localized to your satisfaction (all particles more or less track the robot, within error).

- Comment (07 May '12, 11:40, smjohns): If you understand this, then it's likely clear that more particles are better: with only four particles there is too much room for observational error. In other words, it's unlikely that all four particles will move between states exactly like the robot does.
- Comment (09 May '12, 13:22, ShN): I understand better now with this example. I'll implement this simple example and edit my first post to ask some more related questions afterwards. Thanks.
- Comment (10 May '12, 06:51, ShN): @smjohns I've edited my first post to add a simple C++ implementation of the example that you gave here. Can you check whether I understood it well? Thanks.
**Answer** (answered 07 May '12, 09:41 by smjohns)

Two-state system: in state A there are 3 particles, each with weight 0.1; in state B there is 1 particle, with weight 0.5. So to resample my particles (choose a new distribution of my four particles based on the existing weighting), I repeat the following four times: choose a new particle. I choose a particle in state B with probability 5/8, and a particle in state A with probability 3/8.

Where do these numbers come from?

P(sample in state S) = (# particles in S * weight of particles in S) / sum over states S' of (# particles in S' * weight of particles in S').

So P(A) = (3 * 0.1) / (3 * 0.1 + 1 * 0.5) = 0.3 / 0.8 = 3/8, and P(B) = 0.5 / 0.8 = 5/8. So you are likely to have more particles in B than in A after resampling. You are unlikely to have all four particles in state A after resampling; you expect that to happen with probability (3/8)^4 ≈ 0.02, i.e. about 2% of the time.

- Comment (07 May '12, 10:39, ShN): If we compare this example to the algorithm in the link that I gave, what is the data y_t? What in the algorithm represents the two states A and B?
- Comment (07 May '12, 10:45, smjohns): I can't open that file; someone else will have to take that on.
- Comment (07 May '12, 11:01, ShN): Well, I've added the algorithm to my first post. Can you check it, please?
- Comment (14 May '12, 09:08, ShN): @smjohns I see that at any given time t, the probability P(x^i_t = A) is the same for all particles generated at that time. Is that correct? Is it because of the initial distribution P(x^i_0 = A) = P(x^i_0 = B) = 0.5?
- Comment (14 May '12, 09:44, smjohns): No. The probability of a particle being in state s (e.g., s = A) after resampling (aka step 3) depends on the weighting of the particles before resampling. In the example above, those weights are 3/8 for A and 5/8 for B on that particular step. In the original, full example, step 3 resampled with weight 0.8 on A and 0.2 on B. The next time you resample, the weights will probably be different, which will give different probabilities. It's worthwhile to note that all of the observations and all of the movements of the original object to date are baked into these calculations. Hope that helps.
**Answer** (answered 07 May '12, 11:06 by Ben Haley)

Not sure about the paper, but I coded a Kalman filter on jsfiddle for CS 373 that has a pretty little implementation. The actual Kalman update is only about 7 lines; the rest is setup and interface code. If you are going to use this example, be careful: I add a decay step that makes the Kalman filter forget the past. This allows it to do fun things like change direction, but it is not part of the Kalman definition.

```javascript
// prediction
x = F.x(x).add(u);
P = F.x(P).x(F.transpose());

// measurement update
Z = $M([[xMeasure, yMeasure]]);
y = Z.transpose().subtract(H.x(x));
S = H.x(P).x(H.transpose()).add(R);
K = P.x(H.transpose()).x(S.inverse());
x = x.add(K.x(y));
P = I.subtract(K.x(H)).x(P);
```

- Comment (10 May '12, 06:54, ShN): Well, I want to implement particle filters on my own, from scratch, just to understand them. I've edited my first post to add a simple C++ implementation of the example that @smjohns gave before. Can you please check whether I understood it well, or whether there are some misunderstandings?
- Comment (10 May '12, 09:38, smjohns): I don't know C++, so I can't guarantee that your code is working properly. I did take a look, and I didn't notice anything glaring, other than that you coded for this single example rather than coding it more generally and using this as an example. I know this probably isn't as helpful as you hoped; C++ isn't my cup of tea.
- Comment (10 May '12, 09:59, Ben Haley): @ShN Sorry, I don't have experience with C++ either. But I think the best way to test a Kalman fiddle is to build a simple UI and check that it performs as it should, i.e. that it can track a particle with constant velocity. My jsfiddle code has an example of that. It didn't take too long to code, but it was harder than the Kalman filter itself. I don't know how hard it would be in C++.
- Comment (10 May '12, 10:16, ShN): When I execute my C++ program with N = 50 (number of particles), I can see that at each time t there are (among the N particles generated at that time) many more particles whose state corresponds exactly to the observation y_t. Does this mean that my algorithm performs well?
- Comment (10 May '12, 10:33, Ben Haley): D'oh, I've been showing off my Kalman filter code; I mixed it up with particle filters. Sorry about that. To test your algorithm, check how well your particles correspond to the true position of the robot. Your observations should include artificial noise, and your particles should filter out that noise and center on the true position. This correspondence should grow stronger as more observations are made.
- Comment (10 May '12, 11:04, ShN): How do you include artificial noise in the observations, since in this example each observation y_t is just a state that can be 'A' or 'B'? By the way, the context of the original problem I want to deal with is sequential data clustering, not a moving robot, so I still need to understand how all this applies in that context.
- Comment (10 May '12, 12:03, smjohns): The probability P(y|x) is the measurement noise. I used a simple model: your observation has an 80% chance of being correct. If nearby states are distributed in some kind of space, then you could apply Gaussian noise (or whatever distribution makes sense in your problem).
- Comment (10 May '12, 13:20, ShN): @smjohns If you know C++, can you check the implementation I've made from your example? I see that at any given time t, the probability P(x^i_t = A) is the same for all particles i generated at that time. Is this normal?
**Answer** (answered 08 May '12, 08:33 by XavierP)

You may find this post useful: http://www.aiqus.com/questions/25074/85-did-i-actually-get-particle-filters-right/25635

- Comment (10 May '12, 06:55, ShN): This link is also useful, thanks. By the way, I've edited my first post to add a simple C++ implementation of the example that @smjohns gave before. Can you please check whether I understood it well, or whether there are some misunderstandings?
- Comment (10 May '12, 16:37, XavierP): Sorry, I can't spare the time to look at code. Test it: make an object that uses it to locate itself. By the way, here's a theoretical explanation: http://www.aiqus.com/questions/22778/how-i-attempted-the-measurement-question-using-bayes-rule/24133 There are lots of useful posts back from AI-class.
- Comment (11 May '12, 10:51, ShN): Well, can you just tell me whether it is normal that, at any given time t, the probability P(x^i_t = A) is the same for all particles (i = 1...N) generated at that time t?