Info:
Monte Carlo methods are a powerful class of methods that approach problems with no prior knowledge of the environment; for this reason they are often called model-free methods.
They are frequently used as numerical approximations of problems that cannot be solved exactly, such as a complex, non-trivial equation or model. By brute-force random sampling and the Law of Large Numbers, we can get close to the true value, and this approximation is good enough for many problems.
A Monte Carlo method is any method that uses randomness to solve problems. The algorithm repeats suitable random sampling and observes the fraction of samples that obey particular properties in order to make numerical estimations [6.1].
Yuxi (Hayden) Liu,
PyTorch 1.x Reinforcement Learning Cookbook
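Before the π example, here is a minimal sketch of that idea at work: estimating the probability that two dice sum to 7 purely by repeated random sampling, with no model of the dice. The seed and sample size are arbitrary choices for illustration, not from the book.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Exact answer for comparison: 6 of the 36 equally likely outcomes sum to 7
exact = 6 / 36

# Estimate the same probability purely by sampling (model-free)
n = 100_000
hits = sum(1 for _ in range(n)
           if random.randint(1, 6) + random.randint(1, 6) == 7)
estimate = hits / n
print(estimate)  # close to 1/6 ≈ 0.1667
```

By the Law of Large Numbers, the sampled fraction drifts toward the exact 1/6 as n grows.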
Example:
A great example is to use a Monte Carlo method to estimate the humble π.
Givens:
- Assume a square has an area of 4.
- Area = Length * Width
- Length = 2
- Width = 2
- Centered at origin
- Range [-1<=x<=1 and -1<=y<=1]
- Assume a unit circle inscribed in the square; its area is our target, π.
- Area = π * radius^2
- radius = 1
- Range [-1<=x<=1 and -1<=y<=1]
- Calculate Area Square
- Area = Length * Width = 2 * 2
- Area = 4
- Calculate Area Circle
- Area = π * radius^2 = π * 1^2
- Area = π

Derive:
- Given the areas above, a few manipulations arrive at an estimate of the elusive π.
- Ratio = Area Circle / Area Square
- Ratio = (π * radius^2) / (Length * Width)
- Ratio = π / 4
- Because the points are sampled uniformly over the square, the fraction of points that land inside the circle approximates this area ratio.
- The ratio alone is only π/4, so we multiply it by 4 to recover π.
- π = Ratio * 4
- π = (π / 4) * 4
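The derivation can be sanity-checked with plain arithmetic. The counts below are hypothetical, chosen only so the ratio comes out near π/4:

```python
# Hypothetical counts (illustration only, not the output of a real run)
n_points_square = 10_000
n_points_circle = 7_854   # assumed to be about pi/4 of the total

ratio = n_points_circle / n_points_square   # approximates Area Circle / Area Square
pi_est = ratio * 4                          # undo the factor of 1/4
print(pi_est)  # 3.1416
```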
Implement:
# Import libs
import math
import torch
import matplotlib.pyplot as plt

# N sample points
n_point_square = [10**3, 10**4, 10**5, 10**6]

# Loop through each sample size
for num_pts_sq in n_point_square:
    # Create 1,000-1,000,000 random points for the square: -1<=x<=1 and -1<=y<=1
    points_square = torch.rand((num_pts_sq, 2)) * 2 - 1
    # Init count and list of points inside the circle
    n_point_circle = 0
    points_circle = []
    # For each point from the square landing inside the circle of radius 1,
    # save the point to the list and increment the count
    for point in points_square:
        r = torch.sqrt(point[0]**2 + point[1]**2)
        if r <= 1.0:
            points_circle.append(point)
            n_point_circle += 1
    # Circle points to tensor
    points_circle = torch.stack(points_circle)
    # Plot the random points inside and outside the circle
    plt.plot(points_square[:, 0].numpy(), points_square[:, 1].numpy(), 'y.')
    plt.plot(points_circle[:, 0].numpy(), points_circle[:, 1].numpy(), 'g.')
    # Draw the circle and the square
    i = torch.linspace(0, 2 * math.pi, steps=100)
    plt.plot(torch.cos(i).numpy(), torch.sin(i).numpy())
    plt.plot([-1, -1, 1, 1, -1], [-1, 1, 1, -1, -1], 'r')
    plt.gca().set_aspect('equal')
    plt.title('Monte Carlo π\nN = %d' % num_pts_sq)
    plt.show()
    # Calc value of pi
    pi_est = 4 * (n_point_circle / num_pts_sq)
    print('Est. val of pi is: %1.8f @ N = %d' % (pi_est, num_pts_sq))

# Estimation vs. iteration, up to 10,000
n_point_circle = 0
pi_iteration = []
for i in range(1, n_point_square[1] + 1):
    point = torch.rand(2) * 2 - 1
    r = torch.sqrt(point[0]**2 + point[1]**2)
    if r <= 1.0:
        n_point_circle += 1
    pi_iteration.append(4 * (n_point_circle / i))
plt.plot(pi_iteration)
plt.plot([math.pi] * n_point_square[1], '--')
plt.xlabel('Iteration')
plt.ylabel('Estimated π')
plt.title('Estimation History')
plt.show()
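The Python loop over individual points is the slow part; the same inside-the-circle test can be done in one vectorized operation on the whole tensor. A sketch of that variant (the seed and sample size are arbitrary choices):

```python
import torch

torch.manual_seed(0)  # arbitrary seed, just for reproducibility
n = 10**6
points = torch.rand(n, 2) * 2 - 1             # uniform points in [-1, 1]^2
inside = points.pow(2).sum(dim=1) <= 1.0      # boolean mask: inside the unit circle
pi_est = 4 * inside.float().mean().item()     # fraction inside, times 4
print(pi_est)
```

This produces the same estimator without a Python-level loop, which matters once N reaches the millions.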
Results:

(Plots: the sampled points inside and outside the circle for each N, followed by the estimation history up to 10,000 iterations.)

Notice that by around iteration 4,000 the π estimate has converged fairly well.
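That convergence is no accident: the Monte Carlo error shrinks roughly like 1/√N, so each additional digit of accuracy costs about 100x more samples. A quick standard-library check of how the error falls with N (seed and sample sizes are arbitrary choices):

```python
import math
import random

random.seed(1)  # arbitrary fixed seed for reproducibility

def estimate_pi(n):
    """Monte Carlo estimate of pi from n uniform points in [-1, 1]^2."""
    inside = sum(1 for _ in range(n)
                 if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 <= 1.0)
    return 4 * inside / n

for n in (10**2, 10**4, 10**6):
    print('N = %7d  error = %.4f' % (n, abs(estimate_pi(n) - math.pi)))
```

The printed errors fluctuate from run to run, but on average the N = 10**6 error is about 100x smaller than the N = 10**2 error.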
References:
- [6.1] Yuxi (Hayden) Liu, PyTorch 1.x Reinforcement Learning Cookbook, Packt, 2019
- GitHub link: https://github.com/csalinasonline/PythonProjectScripts/tree/master/Project_1_S
- Medium Post link: https://medium.com/@csalinasonline/monte-carlo-methods-calculating-pi-5cb807fcf122