Saturday, December 3, 2016

Objects with Functions in Python

I've learned the hard way that using classes makes programs much easier to work with. I had developed a habit of writing bare functions without wrapping them in classes. While this can work for a simple program, I ran into difficulties when I tried to import functions from other .py files. Building classes makes this process much easier.

A typical example of a class from an introductory book does not make clear the uses of classes and objects instantiated from them. Consider this Cat class that I have made:

# cat class

class Cat():
    def __init__(self, name, color):
        self.name = name
        self.color = color
    def helloKitty(self):
        print("My name is " + self.name + ". I am an " + self.color + " cat.")
    
    def petCat(self):
        print(self.name + " purred. This cat likes when you pet it.")

cat = Cat(name = "Milo", color = "orange")

cat.helloKitty()
cat.petCat()

Running this will return:

runfile('C:/Users/James/Google Drive/Python Scripts/Lessons/CatObject.py', wdir='C:/Users/James/Google Drive/Python Scripts/Lessons')
My name is Milo. I am an orange cat.
Milo purred. This cat likes when you pet it.

What do we get from this? The basic pieces are present. We have a class, which includes everything indented beneath the line "class Cat():". We have two attributes that we pass when we create a Cat, defined in "def __init__(self, name, color):". These are used when we call the methods "helloKitty()" and "petCat()". To call these methods, we first instantiate a Cat object, which I have named "cat". I call the methods owned by "cat" by adding a "." and the method name. Thus, cat.helloKitty() and cat.petCat() print the two lines you see above.

Classes like Cat are boring. They don't help me internalize the significance of objects and methods. I have made a class, MathOperations, for this purpose. MathOperations will let you sum two numbers, multiply them, raise a base to an exponent, and multiply a base by 10 raised to the specified exponent:


# Math Functions

class MathOperations():
    def __init__(self):
        self.num = 0
        
    def add(self, a, b):
        self.num = a + b 
        return self.num
    
    def multiply(self, a, b):
        self.num = a * b
        return self.num

    def power(self, a, b):
        self.num = a ** b
        return self.num
    
    def exp(self, a, b):
        self.num = a * 10 ** b
        return self.num

math = MathOperations()

m = 8
n = 4

add = math.add(m, n)
multiply = math.multiply(m, n)
power = math.power(m, n)
exp = math.exp(m, n)

names = ['add', 'multiply', 'power', 'exp']
array = [add, multiply, power, exp]

for i in range(len(array)):
    print(names[i], str(m), str(n), '\n', str(array[i]))

At the bottom of this file, I create a MathOperations() object and use its methods with the numbers 8 and 4 (in that order). I want to show what values these numbers yield with each method and organize the results. The final results are:

runfile('C:/Users/James/Google Drive/Python Scripts/Lessons/MathFunctionLesson.py', wdir='C:/Users/James/Google Drive/Python Scripts/Lessons')
add 8 4 
 12
multiply 8 4 
 32
power 8 4 
 4096
exp 8 4 
 80000

Alternately, I can achieve the same output by importing MathOperations from a different file:


from MathFunctionLesson import *


math = MathOperations()

m = 8
n = 4

add = math.add(m, n)
multiply = math.multiply(m, n)
power = math.power(m, n)
exp = math.exp(m, n)

names = ['add', 'multiply', 'power', 'exp']
array = [add, multiply, power, exp]

for i in range(len(array)):
    print(names[i], str(m), str(n), '\n', str(array[i]))


This barely scratches the surface of the topic, but it is more interesting and memorable than Cat.

Friday, November 18, 2016

HeatSpace for NetLogo

I have put up a beta version of HeatSpace. This program generates visualizations for NetLogo's BehaviorSpace. Find the program and manual here:
NetLogo is an agent-based modeling platform whose ease of use and functionality are hard to beat. The goal of this manual is to succinctly show how to generate visualizations that present significant portions of a model's parameter space. Those who are not accustomed to agent-based modeling may expect visualizations to come in the form of line plots. Line plots are an easy way to convey a limited amount of information to a user, but it is difficult to convey more than a few lines without confusing the observer. Instead of representing the value of an output spatially, heatmaps represent value using color.
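To make the heatmap idea concrete, here is a minimal sketch using matplotlib's imshow. The parameter range and the output values are invented for illustration; they are not taken from HeatSpace or any NetLogo model:

```python
# Minimal heatmap sketch: represent an output's value with color
# over a 2-D parameter grid instead of drawing many lines.
# The parameter range and output values are invented for illustration.
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen; drop this line to display interactively
import matplotlib.pyplot as plt
import matplotlib.cm as cm

params = np.linspace(0.45, 0.8, 8)        # hypothetical parameter values
x, y = np.meshgrid(params, params)
output = x * y                            # stand-in for a model output

plt.imshow(output, origin='lower',
           extent=(params[0], params[-1], params[0], params[-1]),
           cmap=cm.Greys_r, interpolation='nearest')
plt.colorbar()
plt.xlabel('Parameter 1')
plt.ylabel('Parameter 2')
plt.savefig('heatmap_sketch.png')
```

Each cell of the grid is one combination of parameter values, and its shade is the output's value, which is the same layout the code at the bottom of this post builds from BehaviorSpace data.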

When we visualize the parameter space, it is not enough to represent each combination of parameters with a single run. Each combination must be mapped using a substantial number of runs. Twenty runs will usually produce a large enough sample to faithfully represent the parameter space, and more is better. If generating data from the model of interest is not costly, use more runs.

HeatSpace averages, across runs, the values generated at each time period. These averages can then be mapped onto two axes whose values are fixed throughout a run. A single map can be generated for each time period, creating a series of frames analogous to a movie. Alternately, the x-axis of a heatmap can represent time, so that the change in the value of an output at each period t, as the y-axis parameter varies, can be represented.
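The averaging step can be sketched as follows. The run data here is random stand-in data, not NetLogo output, and the shapes (20 runs, 50 ticks) are assumptions for illustration:

```python
# Sketch of the averaging step: for one combination of parameters,
# average an output across runs at each tick. The run data is random
# stand-in data, not NetLogo output.
import numpy as np

runs_per_setting = 20
ticks = 50
rng = np.random.default_rng(0)

# one row per run, one column per tick
runs = rng.normal(loc=100, scale=10, size=(runs_per_setting, ticks))

# mean across runs at each tick -> one value per tick
mean_per_tick = runs.mean(axis=0)

# each per-tick mean fills one cell of one heatmap frame, so a full
# set of runs yields a series of frames analogous to a movie
print(mean_per_tick.shape)   # prints (50,)
```

The full program below does the same thing with pandas DataFrames, one per cell of the parameter grid.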


Tuesday, November 15, 2016

JamesLCaton.com

Check out my personal website:
I am a graduate lecturer and candidate for a Ph.D. in economics at George Mason University. I have been a recipient of the F. A. Hayek Fellowship from Mercatus, the I.H.S. Humane Studies Fellowship, and their Summer Research Fellowship. I hold an M.A. in Economics from San Jose State University, where I was a participant in the Student Faculty Partnership and received the Award for Excellence in Economics. I have co-edited Macroeconomics, a two-volume collection of essays and primary sources that represent the core of macroeconomic thought. I have also published articles in the Review of Austrian Economics and Advances in Austrian Economics and published book reviews for EH.net, The Journal of Markets and Morality, and History: Review of New Books.

Wednesday, November 2, 2016

Ex Ante, Ex Post: Making Sense of Rational Expectations and the Efficient Market Hypothesis

The move from a predominantly Keynesian paradigm in macroeconomics to the monetarist and new classical macroeconomics that followed represents a shift from a general skepticism of markets within economics to a belief in market efficiency. The most extreme version of this is represented by real business cycle theorists, who model the macroeconomy without using an upward-sloping short-run aggregate supply curve. (This is actually how I prefer to teach disequilibrium effects associated with changes in aggregate demand.) The emphasis of these models is equilibrium, which is, on average, reached in the economic systems they describe. The results obtained from these models have been successful. In the long-run, the quantity theory holds true. In the long-run, the economy tends to grow at a steady rate, affected mostly by impediments to production and exchange driven by policy. What could be wrong about a field that has generated such tremendous explanatory power?

John Maynard Keynes objected to the long-run analysis of the classical economists, meaning most economists who preceded him, including many of his contemporaries. He claimed that the agents of economic theory were assumed to have higher quality knowledge and decision-making abilities than they actually have. In a sense, Keynes was correct, but he overstated his case. "In the long-run, we're all dead" is a catchy slogan. It is also an abuse of economic ontology. The efficacy of the assumption of rational expectations and of the efficient market hypothesis depends on this distinction between long-run and short-run. Given time for adjustment and a lack of external perturbations, we expect markets to reach equilibrium prices and outputs for different goods. In the long-run, markets select for agents whose knowledge, as reflected by their action, is superior in light of the outcomes these actions generate. In the long-run, agent action is tightly constrained by the budget constraint. In the short-run, an agent can act with little regard for his budget constraint. This may be unwise, especially in the extreme case, but eventually bills must be paid or else the agent loses his power. We must also consider the rate of feedback that economic agents receive concerning their investments. Information cascades dominate human decision-making (Bikhchandani, Hirshleifer, and Welch, 1998; Earl, Peng, and Potts, 2007). That is, agents learn from one another, copying those who appear to be successful and what appears to be common wisdom. The intelligence of one or a few is shared among many. Think about the responses of investors, lay and professional, to perceived opportunities in the last two booms that dominated U.S. financial markets. Common perception went something like:

"The housing market is looking good and remember that the average price of real estate almost never falls!" 
"Have you heard about investments in tech? Better profit from the rise of the internet while we still can!"

Why didn't investors see these "bubbles" coming if markets are truly rational?

Markets are rational, but we sometimes forget to ask what it is that makes them rational. Failure makes them rational. Rational expectations and the efficient market hypothesis reflect the equilibrium arrived at by the market process. If there are above-market rates of profit to be gained by pursuing one investment over another, those who invest in those markets will grow their total wealth and, in the process, push the rate of profit for that investment back down toward the market average. Those who miss out on these opportunities will grow at a lower rate relative to those who gained from them. Likewise, those whose investments return a below-average rate of profit will be encouraged to pursue other avenues in light of opportunity cost. Those from this category whose profits are actually negative receive a strong signal to scale down their efforts or leave the market. In this process, many agents may guess wrong.

Ex post, after the fact, the market tends to select out those whose efforts fail. This tends to empower agents whose expectations ex ante, beforehand, are more correct. Market selection makes rational expectations and the efficient market hypothesis hold in the long-run. The existence of short-run fluctuations is well observed. In the short-run, changes in stock prices follow a Cauchy distribution (Fama 1965). Whatever their form, we do not experience these distributions themselves. Rather, we experience states that, over time, comprise these distributions. Rational expectations and the efficient market hypothesis don't dispute this. The long-run models derived from them acknowledge that markets are, on average, right, even if they are unstable at times.

How do we describe the short-run? Criteria for efficiency hold agent belief constant, but it is this belief that can experience tremendous flux in the short-run. Without a theory of agent knowledge and the market process driven by this knowledge, a robust theory of the short-run in economics lies out of reach. History matters. Changes in nominal aggregates may not affect real aggregates in equilibrium, but they do affect the structure of society, and they matter outside of equilibrium. We must take care in elaborating the implications of macroeconomic models. Short-run deviations can have long-run effects on capital structure. This is true of physical capital in markets as well as social capital. Thus, the Great Depression radically changed the economic, political, and academic landscape. In our own field, it allowed for the "Keynesian Diversion" that lasted for several decades! From a given point of departure, a short-run deviation may lead to the realization of an inferior adjacent possible, which economic theory should consider.

Tuesday, October 4, 2016

Visualize the Behavior Space in Netlogo: 1.1

Last month, I posted code for visualizing the behavior space using Python to process data from NetLogo. See that post for the NetLogo template you will want to use. The .py file I posted before was not the easiest to interpret or work with. It required that a programmer plan ahead, noting the position of runs with particular values for exogenously determined parameters. I rewrote the code so that the data processing is more efficient and easier to work with. The version I am posting works with a behavior space where only two variables have been set exogenously across runs. However, if it were necessary, a third or fourth variable could be altered in the same set of runs, so long as that third and/or fourth variable was assigned a particular value where runs are matched to coordinates in process_data (the loop over axis_values). Also note that I have reduced the variety of heatmaps presented, since only one is required to convey the process. I loop the image-generator code to create a series of frames.

I may eventually post such an extension of the code if I generate output that requires it in the future. For now, I wanted to leave you with something easier to work with. Remember, you should end up with something that looks like this (from the earlier post):



See the updated code below:

from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import pandas as pd
from matplotlib.backends.backend_pdf import PdfPages

# number of csv files generated by NetLogo = number of runs
files = 1280

# length of each run of the model
ticks = 50

# these are the values of interest
# in this case we adjusted water metabolism and sugar metabolism
axis_values = np.array([.45, .5, .55, .6, .65, .7, .75, .8])

# length of x-axis and y-axis for heatmap
num_x = len(axis_values)
num_y = num_x
runs_per_setting = 20

# min and max x and y parameters for heatmap
min_sugar_metabolism = .45
max_sugar_metabolism = .8
min_water_metabolism = .45
max_water_metabolism = .8

# increments tested (here we assume constant increment, could be logged, quadratic, etc...)
inc_sugar = .05
inc_water = .05
            
# set up names of each column of data from csv
names = pd.Series([
    'sugar_metabolism_rate',
    'water_metabolism_rate',
    'sugar', 
    'water',
    'mean_price',
    'price_variance',
    'population', 
    'basic_only',
    'basic_herder',
    'basic_arbitrageur', 
    'basic_herder_arbitrageur', 
    'switcher_only', 
    'switcher_herder', 
    'switcher_arbitrageur', 
    'switcher_herder_arbitrageur', 
    'percent_basic',
    'percent_arbitrageur', 
    'percent_herder', 
    'percent_switcher', 
    'basic_only_wealth',
    'basic_herder_wealth',
    'basic_arbitrageur_wealth',
    'basic_herder_arbitrageur_wealth',
    'switcher_only_wealth',
    'switcher_herder_wealth',
    'switcher_arbitrageur_wealth',
    'switcher_herder_arbitrageur_wealth', 
    'sugar_flow', 
    'water_flow',
    'distance_from_equilibrium_price',
    'fifty_period_RAP', 
    'mean_rate_of_price_change'])
# names dict used to instantiate an m X n array that will be used
# to keep track of the average value of each variable at each tick for each category
# will be inserted into mean_values dataframe
names_dict = {
    'sugar_metabolism_rate': [np.arange(ticks)],              
    'water_metabolism_rate': [np.arange(ticks)],              
    'sugar': [np.arange(ticks)],                        
    'water': [np.arange(ticks)],
    'mean_price': [np.arange(ticks)], 
    'price_variance': [np.arange(ticks)],
    'population': [np.arange(ticks)],
    'basic_only': [np.arange(ticks)], 
    'basic_herder': [np.arange(ticks)],
    'basic_arbitrageur': [np.arange(ticks)],
    'basic_herder_arbitrageur': [np.arange(ticks)],
    'switcher_only': [np.arange(ticks)],
    'switcher_herder': [np.arange(ticks)],
    'switcher_arbitrageur': [np.arange(ticks)],
    'switcher_herder_arbitrageur': [np.arange(ticks)],
    'percent_basic': [np.arange(ticks)],
    'percent_arbitrageur': [np.arange(ticks)],
    'percent_herder': [np.arange(ticks)], 
    'percent_switcher': [np.arange(ticks)],
    'basic_only_wealth': [np.arange(ticks)],
    'basic_herder_wealth': [np.arange(ticks)],
    'basic_arbitrageur_wealth': [np.arange(ticks)],
    'basic_herder_arbitrageur_wealth': [np.arange(ticks)],
    'switcher_only_wealth': [np.arange(ticks)],
    'switcher_herder_wealth': [np.arange(ticks)],
    'switcher_arbitrageur_wealth': [np.arange(ticks)],
    'switcher_herder_arbitrageur_wealth': [np.arange(ticks)], 
    'sugar_flow': [np.arange(ticks)],
    'water_flow': [np.arange(ticks)],
    'distance_from_equilibrium_price': [np.arange(ticks)],
    'fifty_period_RAP': [np.arange(ticks)],
    'basic_only_wealth_per_capita': [np.arange(ticks)],
    'basic_herder_wealth_per_capita': [np.arange(ticks)],
    'basic_arbitrageur_wealth_per_capita': [np.arange(ticks)],
    'basic_herder_arbitrageur_wealth_per_capita': [np.arange(ticks)],
    'switcher_only_wealth_per_capita': [np.arange(ticks)],
    'switcher_herder_wealth_per_capita': [np.arange(ticks)],
    'switcher_arbitrageur_wealth_per_capita': [np.arange(ticks)],
    'switcher_herder_arbitrageur_wealth_per_capita': [np.arange(ticks)],             
    'percent_basic_only': [np.arange(ticks)],
    'percent_basic_herder': [np.arange(ticks)],
    'percent_basic_arbitrageur': [np.arange(ticks)],
    'percent_basic_herder_arbitrageur': [np.arange(ticks)],
    'percent_switcher_only': [np.arange(ticks)],
    'percent_switcher_herder': [np.arange(ticks)],
    'percent_switcher_arbitrageur': [np.arange(ticks)],
    'percent_switcher_herder_arbitrageur': [np.arange(ticks)],
}

# Lists used to generate new categories of data
new_category = pd.Series([
    'basic_only_wealth_per_capita',
    'basic_herder_wealth_per_capita',
    'basic_arbitrageur_wealth_per_capita',
    'basic_herder_arbitrageur_wealth_per_capita',
    'switcher_only_wealth_per_capita',
    'switcher_herder_wealth_per_capita',
    'switcher_arbitrageur_wealth_per_capita',
    'switcher_herder_arbitrageur_wealth_per_capita',
    'percent_basic_only',
    'percent_basic_herder',
    'percent_basic_arbitrageur',
    'percent_basic_herder_arbitrageur',
    'percent_switcher_only',
    'percent_switcher_herder',
    'percent_switcher_arbitrageur',
    'percent_switcher_herder_arbitrageur'])
  
numerator_category = pd.Series([
    'basic_only_wealth',
    'basic_herder_wealth',
    'basic_arbitrageur_wealth',
    'basic_herder_arbitrageur_wealth',
    'switcher_only_wealth',
    'switcher_herder_wealth',
    'switcher_arbitrageur_wealth',
    'switcher_herder_arbitrageur_wealth',
    'basic_only',
    'basic_herder',
    'basic_arbitrageur',
    'basic_herder_arbitrageur',
    'switcher_only',
    'switcher_herder',
    'switcher_arbitrageur',
    'switcher_herder_arbitrageur'])

denominator_category = pd.Series([
    'basic_only',
    'basic_herder',
    'basic_arbitrageur',
    'basic_herder_arbitrageur',
    'switcher_only',
    'switcher_herder',
    'switcher_arbitrageur',
    'switcher_herder_arbitrageur',
    'population',
    'population',
    'population',
    'population',
    'population',
    'population',
    'population',
    'population'])
        
def process_data():
    # Dataframe will house the average values of each variable across runs for some given 
    # pair of parameters
    mean_values = pd.DataFrame(columns = [np.arange(num_x)], index = [np.arange(num_y)])    
    for x in range(0,num_x):
        for y in range(0,num_y):
            mean_values[x][y] = pd.DataFrame(names_dict)
    # import csvs using names array as headers for each column
    # order of elements in names array should match order of objects recorded
    # during runs in NetLogo

    for i in range(1, files + 1):
        filename = str(i) + 'sugarscapeGlobalTradeBasics.csv'
        runs = pd.read_csv(filename, names = names)
        for x in range(len(new_category)):
            add_per_capita_categories(runs, new_category[x], numerator_category[x], denominator_category[x])
    
        c = 0
        
        # Find runs whose fixed parameter values match the values for the target
        # coordinates in mean values, will eventually transfer to heatmap
        for a in axis_values:
            if runs.iloc[0]['sugar_metabolism_rate'] == a: # round(a,2):
                for b in axis_values:
                    if runs.iloc[0]['water_metabolism_rate'] == b: # round(b,2):
                        x =  int(round(((b - min_water_metabolism) / inc_water),0))
                        y =  int(round(((a - min_sugar_metabolism) / inc_sugar),0))

                        if c < 1:
                            # avoid error in 0th element of each category 
                            mean_values[x][y] = runs
                        else:
                            # once the first run data has been placed 
                            mean_values[x][y] = mean_values[x][y].add(runs, fill_value = 0)

                        c += 1

    mean_values = mean_values / runs_per_setting
    print_behavior_space_representation(mean_values)
            
##############################################################################################                        
##############################################################################################

def add_per_capita_categories(target_array, per_capita_name, per_capita_numerator, population_denominator):
    target_array[per_capita_name] = target_array[per_capita_numerator] / target_array[population_denominator]
    

def initialize_image(num_x, num_y):
    # build a num_y x num_x grid of zeros to be filled with mean values
    image = []
    for i in range(num_y):
        x_colors = []
        for j in range(num_x):
            x_colors.append(0)
        image.append(x_colors)
    return image


def color_points(title, filename, category, mean_values,pp, tick, min_val, max_val):
    # x_p = num_x
    # y_p = num_y
    image = initialize_image(num_x, num_y)
    
    for i in range(0, num_x):
        for j in range(0, num_y):
            image[i][j] = mean_values[i][j].iloc[tick][category]
    print(image)        
    plt.imshow(image, origin='lower', extent=(min_sugar_metabolism, max_sugar_metabolism,  min_water_metabolism, max_water_metabolism),
               cmap=cm.Greys_r, interpolation = 'nearest')
    plt.colorbar()
    plt.clim(min_val, max_val)
    plt.xlabel('Water Consumption Rate')
    plt.ylabel('Sugar Consumption Rate')
    plt.title(title + " " + str(tick + 1) + " Ticks")
    fig = plt.gcf()
    plt.show()
    plt.draw()
    pp.savefig(fig)

#    fig.savefig(filename + ".pdf")

def print_behavior_space_representation(data):  
    interval = 1
    pp = PdfPages('population_local_basic_tick_50.pdf')
    for q in range(1, ticks, interval):    
        color_points("Population\nGlobal Trade Basic","Population Global Trade Basic", "population", data, pp, q, 0, 300)
    plt.close('all')
    pp.close()
                    
if __name__ == '__main__':
    # process_data() writes the heatmap PDF as a side effect
    process_data()