updating lake model mccall function to be consistent with mccall lecture #176
base: master
Conversation
Thanks @Harveyt47, that looks good to me.
@shlff, are you able to review this pull request? Does it look OK to you?
Thanks @jstac. For the code in QE lectures, as you discussed with me before, we need to strike a balance between efficiency and readability. I will give my comments on these changes below. (Since this PR is quite old, I will briefly explain each change in the code before commenting.) Let's get started.
Thanks @jstac. Please find my reviews here.
```diff
@@ -63,7 +63,7 @@ Let's start with some imports:
 from scipy.stats import norm
 from scipy.optimize import brentq
 from quantecon.distributions import BetaBinomial
-from numba import jit
+from numba import jitclass, float64
```
Let's call it Change1.
It supports the newly added speedup of the `McCallModel` class.
Whether we need Change1 depends on whether we want to speed up the `McCallModel` class with `jitclass`.
For `jitclass`, we should use `from numba.experimental import jitclass` instead.
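As a compatibility note (this sketch and the helper `maybe_jitclass` are mine, not part of the PR): the decorator moved to `numba.experimental` around numba 0.49, so a guarded import keeps the code working on either side of that change, and even when numba is not installed at all.

```python
# Import jitclass in a version-tolerant way. The decorator lives in
# numba.experimental on recent numba; older releases exposed it at the
# top level. If numba is absent entirely, fall back to a no-op.
try:
    from numba.experimental import jitclass   # numba >= 0.49
except ImportError:
    try:
        from numba import jitclass            # older numba
    except ImportError:
        jitclass = None                       # numba not installed

def maybe_jitclass(spec):
    """Apply jitclass(spec) when numba is available, else leave the class as-is."""
    def wrap(cls):
        return jitclass(spec)(cls) if jitclass is not None else cls
    return wrap
```

With this helper, `@maybe_jitclass(mccall_data)` could replace `@jitclass(mccall_data)` so the lecture code degrades gracefully, at the cost of a small indirection.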
```diff
@@ -644,7 +644,7 @@ Let's plot the path of the sample averages over 5,000 periods
 xbar = lm.rate_steady_state()

 fig, axes = plt.subplots(2, 1, figsize=(10, 8))
-s_path = mc.simulate(T, init=1)
+s_path = mc.simulate(T, init=1, random_state=1234)
```
Let's call it Change2.
It fixes the initial random state of the random number generator so the simulation is reproducible.
From my point of view, it is an unnecessary change, for two reasons.
- The most important one is that we are not aiming to reproduce the exact figure, so producing figures from fresh samples does not affect the analysis in the lecture.
- Also, adding Change2 might hurt readability a bit.
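For reference, the effect of `random_state` can be illustrated with a plain-numpy stand-in for `mc.simulate` (the transition matrix and the helper `simulate_chain` below are illustrative, not from the lecture): fixing the seed makes repeated runs return the identical sample path.

```python
import numpy as np

# A minimal Markov chain simulator analogous to mc.simulate(T, init, random_state).
def simulate_chain(P, T, init=0, random_state=None):
    rng = np.random.default_rng(random_state)   # fixed seed -> reproducible path
    path = np.empty(T, dtype=int)
    path[0] = init
    for t in range(1, T):
        path[t] = rng.choice(len(P), p=P[path[t-1]])
    return path

P = np.array([[0.9, 0.1],      # illustrative 2-state transition matrix
              [0.4, 0.6]])
a = simulate_chain(P, 50, init=1, random_state=1234)
b = simulate_chain(P, 50, init=1, random_state=1234)
assert (a == b).all()          # same seed gives the same sample path
```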
```python
n = 60                              # n possible outcomes for w
w_default = np.linspace(10, 20, n)  # wages between 10 and 20
a, b = 600, 400                     # shape parameters
dist = BetaBinomial(n-1, a, b)
p_default = dist.pdf()
```
Let's call it Change3.
It moves the default wage vector and probabilities outside the class `McCallModel`.
I think this change is unnecessary, since it makes the lecture read a bit inconsistently, at least for me; the definitions look unexpected here. It also hurts both the efficiency and the readability of the code.
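If the defaults do move to module level, one upside is that they are computed only once and shared across instances. A minimal sketch of that pattern, with the beta-binomial pmf reimplemented using only numpy and the standard library so it runs without quantecon (the helper name `beta_binomial_pmf` is mine, not from the lecture):

```python
import numpy as np
from math import lgamma

def beta_binomial_pmf(n, a, b):
    """pmf of BetaBinomial(n, a, b) on k = 0, ..., n, via log-gamma for stability."""
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    k = np.arange(n + 1)
    log_choose = np.array([lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                           for i in k])
    log_pmf = (log_choose
               + np.array([log_beta(i + a, n - i + b) for i in k])
               - log_beta(a, b))
    return np.exp(log_pmf)

n = 60
w_default = np.linspace(10, 20, n)              # wages between 10 and 20
p_default = beta_binomial_pmf(n - 1, 600, 400)  # probabilities over w_default
```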
```python
mccall_data = [
    ('α', float64),         # job separation rate
    ('β', float64),         # discount factor
    ('γ', float64),         # job offer rate
    ('c', float64),         # unemployment compensation
    ('σ', float64),         # utility parameter
    ('w_vec', float64[:]),  # possible wage values
    ('p_vec', float64[:])   # probabilities over w_vec
]
```
Let's call it Change4.
Changes 4-6 do three main things:
1. Change4-5: pack the default utility function and the Bellman update function into the class `McCallModel`, and speed the class up by declaring the parameter types and applying `jitclass`.
2. Change6: put the function `solve_mccall_model` into a separate code cell and clarify it.
3. Change6: modify the corresponding parts of the following code to accommodate the new `McCallModel` class.

My comments:
- For me, point 2 is a relatively good change, which makes the code more readable while keeping its efficiency. But this clarification seems a bit redundant, since the function already carries the comment `Iterates to convergence on the Bellman equations`; we could drop this change.
- Point 1 seems a bit unnecessary, since it makes the code harder to read, though it might improve its efficiency a bit.
- Point 3 depends on the adoption of point 1.

My suggestions:
- drop point 2;
- whether we adopt point 1 depends on our aim: readability or efficiency. If we want more efficiency at the cost of readability, keep the change; if we want the code more readable, drop it;
- whether we adopt point 3 depends on whether we adopt point 1.
I gave serious thought to the `McCall` class added by @Harveyt47 after my last comments.
I think it is a good idea to make the class consistent here, but we should modify it a bit to capture all the features of the original code.
```diff
-class McCallModel:
+@jitclass(mccall_data)
+class McCallModel:
     """
     Stores the parameters and functions associated with a given model.
     """

     def __init__(self,
-                 α=0.2,        # Job separation rate
-                 β=0.98,       # Discount rate
-                 γ=0.7,        # Job offer rate
-                 c=6.0,        # Unemployment compensation
-                 σ=2.0,        # Utility parameter
-                 w_vec=None,   # Possible wage values
-                 p_vec=None):  # Probabilities over w_vec
+                 α=0.2,
+                 β=0.98,
+                 γ=1.0,
+                 c=6.0,
+                 σ=2.0,
+                 w_vec=w_default,
+                 p_vec=p_default):

         self.α, self.β, self.γ, self.c = α, β, γ, c
         self.σ = σ
-
-        # Add a default wage vector and probabilities over the vector using
-        # the beta-binomial distribution
-        if w_vec is None:
-            n = 60  # Number of possible outcomes for wage
-            # Wages between 10 and 20
-            self.w_vec = np.linspace(10, 20, n)
-            a, b = 600, 400  # Shape parameters
-            dist = BetaBinomial(n-1, a, b)
-            self.p_vec = dist.pdf()
-        else:
-            self.w_vec = w_vec
-            self.p_vec = p_vec
-
-
-@jit
-def _update_bellman(α, β, γ, c, σ, w_vec, p_vec, V, V_new, U):
-    """
-    A jitted function to update the Bellman equations. Note that V_new is
-    modified in place (i.e., modified by this function). The new value of U
-    is returned.
-    """
-    for w_idx, w in enumerate(w_vec):
-        # w_idx indexes the vector of possible wages
-        V_new[w_idx] = u(w, σ) + β * ((1 - α) * V[w_idx] + α * U)
-
-    U_new = u(c, σ) + β * (1 - γ) * U + \
-            β * γ * np.sum(np.maximum(U, V) * p_vec)
-
-    return U_new
+        self.w_vec = w_vec
+        self.p_vec = p_vec
+
+    def u(self, c):
+        σ = self.σ
+        if c > 0:
+            return (c**(1 - σ) - 1) / (1 - σ)
+        else:
+            return -10e6
+
+    def update_bellman(self, V, V_new, U):
+        α, β, γ, c = self.α, self.β, self.γ, self.c
+        w_vec, p_vec, u = self.w_vec, self.p_vec, self.u
+
+        for w_idx, w in enumerate(w_vec):
+            # w_idx indexes the vector of possible wages
+            V_new[w_idx] = u(w) + β * ((1 - α) * V[w_idx] + α * U)
+
+        U_new = u(c) + β * (1 - γ) * U + \
+                β * γ * np.sum(np.maximum(U, V) * p_vec)
+
+        return U_new
```
Let's call it Change5.
We will create a function ``solve_mccall_model`` to iterate with the Bellman operator.
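The iteration that ``solve_mccall_model`` performs can be sketched generically (this stand-in is mine; it uses a toy contraction in place of the real Bellman update and avoids numba so it runs anywhere):

```python
import numpy as np

def solve_by_iteration(update, V0, U0, tol=1e-6, max_iter=2000):
    """Iterate to convergence on the Bellman equations.

    `update(V, V_new, U)` fills V_new in place and returns the new U,
    mirroring the update_bellman method discussed above."""
    V, U = V0, U0
    for _ in range(max_iter):
        V_new = np.empty_like(V)
        U_new = update(V, V_new, U)
        error = max(np.max(np.abs(V_new - V)), abs(U_new - U))
        V, U = V_new, U_new
        if error < tol:
            break
    return V, U

# Toy contraction with fixed point V = U = 2, standing in for the
# Bellman update of the real model.
def toy_update(V, V_new, U):
    V_new[:] = 1 + 0.5 * V
    return 1 + 0.5 * U

V, U = solve_by_iteration(toy_update, np.zeros(3), 0.0)
```

The real `solve_mccall_model` would pass the model's `update_bellman` (bound to a `McCallModel` instance) in place of `toy_update`.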
Let's call the remaining changes Change6.
Hi @jstac,
I updated the McCall function in the lake model to be consistent with the McCall lecture. I make the same changes in this notebook.
I also added `random_state=seed` to the Markov chain simulation.