Notion - Update docs
shiffman authored Feb 25, 2024
1 parent 77a6de3 commit e528355
Showing 3 changed files with 9 additions and 6 deletions.
2 changes: 0 additions & 2 deletions content/09_ga.html
@@ -1493,7 +1493,6 @@ <h2 id="ecosystem-simulation">Ecosystem Simulation</h2>
 let vy = map(noise(this.yoff), 0, 1, -this.maxspeed, this.maxspeed);
 this.xoff += 0.01;
 this.yoff += 0.01;
-
 let velocity = createVector(vx, vy);
 this.position.add(velocity);
 }
@@ -1591,7 +1590,6 @@ <h3 id="genotype-and-phenotype">Genotype and Phenotype</h3>
 // The bigger the bloop, the slower it is.
 this.maxSpeed = map(this.dna.genes[0], 0, 1, 15, 0);
 this.r = map(this.dna.genes[0], 0, 1, 0, 25);
-
 /* All the rest of the bloop initialization */</pre>
 </div>
 <p>Note that the <code>maxSpeed</code> property is mapped to a range from <code>15</code> to <code>0</code>. A bloop with a gene value of <code>0</code> will move at a speed of <code>15</code>, while a bloop with a gene value of <code>1</code> won’t move at all (speed of <code>0</code>).</p>
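The gene-to-trait mapping in the hunk above comes down to p5.js's `map()`, which linearly rescales a value from one range to another. A minimal standalone sketch of the idea: the `map()` below re-implements p5's rescaling so it runs outside a p5 sketch, and `phenotype()` is a hypothetical helper (not in the book's code) showing how one gene drives both traits.

```javascript
// Standalone re-implementation of p5.js's map(): linearly rescales
// value from the range [inMin, inMax] to the range [outMin, outMax].
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// One gene in [0, 1] drives both phenotype traits, producing the
// trade-off described in the diff: bigger bloops are slower.
function phenotype(gene) {
  return {
    maxSpeed: map(gene, 0, 1, 15, 0), // gene 0 → speed 15, gene 1 → speed 0
    r: map(gene, 0, 1, 0, 25),        // gene 0 → radius 0, gene 1 → radius 25
  };
}
```

Because the output range of `maxSpeed` is reversed (`15` down to `0`), the same gene value that maximizes size minimizes speed.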
12 changes: 9 additions & 3 deletions content/10_nn.html
@@ -553,9 +553,15 @@ <h3 id="the-machine-learning-life-cycle">The Machine Learning Life Cycle</h3>
 <li><strong>Prepare the data.</strong> Raw data often isn’t in a format suitable for machine learning algorithms. It might also have duplicate or missing values, or contain outliers that skew the data. Such inconsistencies may need to be manually adjusted. Additionally, as I mentioned earlier, neural networks work best with normalized data, which has values scaled to fit within a standard range. Another key part of preparing data is separating it into distinct sets: training, validation, and testing. The training data is used to teach the model (step 4), while the validation and testing data (the distinction is subtle—more on this later) are set aside and reserved for evaluating the model’s performance (step 5).</li>
 <li><strong>Choose a model.</strong> Design the architecture of the neural network. Different models are more suitable for certain types of data and outputs.</li>
 <li><strong>Train the model.</strong> Feed the training portion of the data through the model and allow the model to adjust the weights of the neural network based on its errors. This process is known as <strong>optimization</strong>: the model tunes the weights so they result in the fewest number of errors.</li>
-<li><strong>Evaluate the model.</strong> Remember the testing data that was set aside in step 2? Since that data wasn’t used in training, it provides a means to evaluate how well the model performs on new, unseen data.</li>
-<li><strong>Tune the parameters.</strong> The training process is influenced by a set of parameters (often called <strong>hyperparameters</strong>) such as the learning rate, which dictates how much the model should adjust its weights based on errors in prediction. I called this the <code>learningConstant</code> in the perceptron example. By fine-tuning these parameters and revisiting steps 4 (training), 3 (model selection), and even 2 (data preparation), you can often improve the model’s performance.</li>
-<li><strong>Deploy the model. </strong>Once the model is trained and its performance is evaluated satisfactorily, it’s time to use the model out in the real world with new data!</li>
 </ol>
+<div class="avoid-break">
+<ol>
+<li value="5"><strong>Evaluate the model.</strong> Remember the testing data that was set aside in step 2? Since that data wasn’t used in training, it provides a means to evaluate how well the model performs on new, unseen data.</li>
+</ol>
+</div>
+<ol>
+<li value="6"><strong>Tune the parameters.</strong> The training process is influenced by a set of parameters (often called <strong>hyperparameters</strong>) such as the learning rate, which dictates how much the model should adjust its weights based on errors in prediction. I called this the <code>learningConstant</code> in the perceptron example. By fine-tuning these parameters and revisiting steps 4 (training), 3 (model selection), and even 2 (data preparation), you can often improve the model’s performance.</li>
+<li value="7"><strong>Deploy the model. </strong>Once the model is trained and its performance is evaluated satisfactorily, it’s time to use the model out in the real world with new data!</li>
+</ol>
 <p>These steps are the cornerstone of supervised machine learning. However, even though 7 is a truly excellent number, I think I missed one more critical step. I’ll call it step 0.</p>
 <ol>
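Of the life-cycle steps in this hunk, step 2 (prepare the data) is the one most often done by hand. A rough sketch of its two key tasks in plain JavaScript, assuming min-max normalization and an illustrative 70/15/15 split (the function names and ratios are my own choices, not from the book or ml5.js):

```javascript
// Normalization: scale every value to the standard range [0, 1].
function normalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  return values.map((v) => (v - min) / (max - min));
}

// Separate the data into distinct training, validation, and testing
// sets. The training set teaches the model (step 4); the validation
// and testing sets are held back for evaluation (step 5).
function split(data, trainFrac = 0.7, valFrac = 0.15) {
  const nTrain = Math.floor(data.length * trainFrac);
  const nVal = Math.floor(data.length * valFrac);
  return {
    training: data.slice(0, nTrain),
    validation: data.slice(nTrain, nTrain + nVal),
    testing: data.slice(nTrain + nVal),
  };
}
```

The key property is that the testing slice never touches training, which is what makes step 5's evaluation on "new, unseen data" meaningful.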
1 change: 0 additions & 1 deletion content/11_nn_ga.html
@@ -621,7 +621,6 @@ <h3 id="speeding-up-time">Speeding Up Time</h3>
 for (let creature of creatures) {
 creature.show();
 }
-
 //{!8} The simulation code runs multiple times according to the slider.
 for (let i = 0; i &#x3C; timeSlider.value(); i++) {
 for (let creature of creatures) {
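The nested loop in this hunk is the whole "speeding up time" trick: drawing happens once per frame, but the simulation logic can advance many steps per frame. A minimal sketch of the idea outside p5.js, where `Creature` and `stepsPerFrame` are illustrative stand-ins for the book's `creatures` array and `timeSlider.value()`:

```javascript
// Stand-in for the simulation's creature: update() advances it by one
// unit of simulated time (the real class would also move, eat, etc.).
class Creature {
  constructor() {
    this.age = 0;
  }
  update() {
    this.age++;
  }
}

// Advance the simulation several steps before the next render, just
// like the slider-controlled loop in the diff: higher stepsPerFrame
// means more simulated time passes per drawn frame.
function advance(creatures, stepsPerFrame) {
  for (let i = 0; i < stepsPerFrame; i++) {
    for (const creature of creatures) {
      creature.update();
    }
  }
}
```

Keeping `show()` outside the inner loop is the point of the change: rendering cost stays constant while simulated time scales with the slider.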
