Images missing when 'img' tag under a 'picture' tag has no 'src' attribute #398

Open
WetHat opened this issue Sep 12, 2024 · 0 comments

WetHat commented Sep 12, 2024

extractFromHtml returns <figure> elements without any images when parsing the content of: https://towardsdatascience.com/how-tiny-neural-networks-represent-basic-functions-8a24fce0e2d5

The linked images in that web page have this structure:

<figure>
    <div>
        <picture>
            <source> 
            <img>
        </picture>
    </div>
</figure>

I've checked in config.js that all of the tags involved are allowed. Further inspection revealed that the <img> tag has no src attribute, which I believe is not valid. I'm not sure whether the HTML parser should use the srcset attribute of a <source> tag in the enclosing <picture> tag to pick an image src.
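
For reference, a minimal reproduction sketch (the import path and the use of the built-in fetch are my assumptions; adjust as needed):

// Reproduction sketch: import path and fetch usage are assumptions.
import { extractFromHtml } from "@extractus/article-extractor";

const url = "https://towardsdatascience.com/how-tiny-neural-networks-represent-basic-functions-8a24fce0e2d5";

const html = await (await fetch(url)).text();
const article = await extractFromHtml(html, url);

// With the bug, the <figure> elements in the extracted content come back empty.
const emptyFigures = (article?.content?.match(/<figure>\s*<\/figure>/g) ?? []).length;
console.log(`empty <figure> elements in extracted content: ${emptyFigures}`);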

A workaround would be to add a custom transformation like so:

fixImagesWithoutSrc(doc: Document) {
    // Find <img> elements inside a <picture> that have no 'src' attribute.
    doc.body.querySelectorAll("picture > img:not([src])").forEach(img => {
        const sources = img.parentElement?.getElementsByTagName("source");
        if (sources && sources.length > 0) {
            const
                source = sources[0],
                srcset = source.getAttribute("srcset");
            if (srcset) {
                // Inject a src attribute using the first URL of the sibling
                // <source> element's srcset (the URL ends at the first
                // whitespace when a width/density descriptor follows).
                img.setAttribute("src", srcset.trim().split(/\s+/)[0]);
            }
        }
    });
}
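
For completeness, this is roughly how the workaround could be wired in as a post-processing transformation (a sketch assuming the addTransformations API and its { patterns, post } shape; the URL pattern is only an example):

// Sketch only: the { patterns, post } transformation shape is an assumption
// based on the documented transformation API.
import { addTransformations, extract } from "@extractus/article-extractor";

addTransformations([
    {
        patterns: [/towardsdatascience\.com\/.*/],
        post: (doc: Document) => {
            // Same logic as fixImagesWithoutSrc above.
            doc.body.querySelectorAll("picture > img:not([src])").forEach(img => {
                const srcset = img.parentElement
                    ?.querySelector("source[srcset]")
                    ?.getAttribute("srcset");
                if (srcset) {
                    img.setAttribute("src", srcset.trim().split(/\s+/)[0]);
                }
            });
            return doc;
        }
    }
]);

const article = await extract("https://towardsdatascience.com/how-tiny-neural-networks-represent-basic-functions-8a24fce0e2d5");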

The returned JSON object (without the workaround) looks like this:

{
  "url": "https://towardsdatascience.com/how-tiny-neural-networks-represent-basic-functions-8a24fce0e2d5",
  "title": "How Tiny Neural Networks Represent Basic Functions",
  "description": "A gentle introduction to mechanistic interpretability through simple algorithmic examplesIntroductionThis article shows how small Artificial Neural Networks (NN) can represent basic functions. The goal is to...",
  "links": [
    "https://towardsdatascience.com/how-tiny-neural-networks-represent-basic-functions-8a24fce0e2d5"
  ],
  "image": "https://miro.medium.com/v2/resize:fit:1024/1*29Hja14Ep12c5-XAlwcrrg.jpeg",
  "content": "<div><div><h2 id="cfa6">A gentle introduction to mechanistic interpretability through simple algorithmic examples</h2><div><a target="_blank" href="https://medium.com/@taubenfeld9?source=post_page-----8a24fce0e2d5--------------------------------"><div><p><img alt="Amir Taubenfeld" src="https://miro.medium.com/v2/resize:fill:88:88/1*shyRd2z70-23Xh4_YAIUMA.jpeg" /></p></div></a><a target="_blank" href="https://towardsdatascience.com/?source=post_page-----8a24fce0e2d5--------------------------------"><div><p><img alt="Towards Data Science" src="https://miro.medium.com/v2/resize:fill:48:48/1*CJe3891yB1A1mzMdqemkdg.jpeg" /></p></div></a></div></div><figure></figure><h2 id="9dc8">Introduction</h2><p>This article shows how small Artificial Neural Networks (NN) can represent basic functions. The goal is to provide fundamental intuition about how NNs work and to serve as a gentle introduction to <a href="https://transformer-circuits.pub/2022/mech-interp-essay/index.html" target="_blank">Mechanistic Interpretability</a> — a field that seeks to reverse engineer NNs.</p><p>I present three examples of elementary functions, describe each using a simple algorithm, and show how the algorithm can be “coded” into the weights of a neural network. Then, I explore if the network can learn the algorithm using backpropagation. I encourage readers to think about each example as a riddle and take a minute before reading the solution.</p><h2 id="63b3">Machine Learning Topology</h2><p>This article attempts to break NNs into discrete operations and describe them as algorithms. An alternative approach, perhaps more common and natural, is looking at the continuous topological interpretations of the linear transformations in different layers.</p><p>The following are some great resources for strengthening your topological intuition:</p><ul><li><a href="https://playground.tensorflow.org/#activation=tanh&amp;batchSize=10&amp;dataset=circle&amp;regDataset=reg-plane&amp;learningRate=0.03&amp;regularizationRate=0&amp;noise=0&amp;networkShape=4,2&amp;seed=0.91521&amp;showTestData=false&amp;discretize=false&amp;percTrainData=50&amp;x=true&amp;y=true&amp;xTimesY=false&amp;xSquared=false&amp;ySquared=false&amp;cosX=false&amp;sinX=false&amp;cosY=false&amp;sinY=false&amp;collectStats=false&amp;problem=classification&amp;initZero=false&amp;hideText=false" target="_blank">Tensorflow Playground</a> — a simple tool for building basic intuition on classification tasks.</li><li><a href="https://cs.stanford.edu/people/karpathy/convnetjs//demo/classify2d.html" target="_blank">ConvnetJS Demo</a> — a more sophisticated tool for visualizing NNs for classification tasks.</li><li><a href="http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/" target="_blank">Neural Networks, Manifolds, and Topology</a> — a great article for building topological intuition of how NNs work.</li></ul><h2 id="4e2b">Three Elementary Functions</h2><p>In all the following examples, I use the terminology “neuron” for a single node in the NN computation graph. Each neuron can be used only once (no cycles; e.g., not RNN), and it performs 3 operations in the following order:</p><ol><li>Inner product with the input vector.</li><li>Adding a bias term.</li><li>Running a (non-linear) activation function.</li></ol><figure></figure><p>I provide only minimal code snippets so that reading will be fluent. 
This <a href="https://colab.research.google.com/drive/1zt9lVUH9jH2zx5nsFA_4Taq6Ic-ve09C?usp=sharing" target="_blank">Colab notebook</a> includes the entire code.</p><h2 id="39c8">The &lt; operator</h2><p>How many neurons are required to learn the function “x &lt; 10”? Write an NN that returns 1 when the input is smaller than 10 and 0 otherwise.</p><h2 id="c36b">Solution</h2><p>Let’s start by creating sample dataset that follows the pattern we want to learn</p><pre><span>X = [[i] for i in range(-20, 40)]<br />Y = [1 if z[0] &lt; 10 else 0 for z in X]</span></pre><figure><figcaption>Creating and visualizing the training data for “&lt; operator”</figcaption></figure><p>This classification task can be solved using <a href="https://en.wikipedia.org/wiki/Logistic_regression" target="_blank">logistic regression</a> and a <a href="https://en.wikipedia.org/wiki/Sigmoid_function" target="_blank">Sigmoid</a> as the output activation. Using a single neuron, we can write the function as <em>Sigmoid(ax+b)</em>. <em>b</em>, the bias term, can be thought of as the neuron’s threshold. Intuitively, we can set <em>b = 10</em> and <em>a = -1</em> and get F=Sigmoid(10-x)</p><p>Let’s implement and run F using PyTorch</p><pre><span>model = nn.Sequential(nn.Linear(1,1), nn.Sigmoid())<br />d = model.state_dict()<br />d["0.weight"] = torch.tensor([[-1]]).float()<br />d['0.bias'] = torch.tensor([10]).float()<br />model.load_state_dict(d)<br />y_pred = model(x).detach().reshape(-1)</span></pre><figure><figcaption>Sigmoid(10-x)</figcaption></figure><p>Seems like the right pattern, but can we make a tighter approximation? For example, F(9.5) = 0.62, we prefer it to be closer to 1.</p><p>For the Sigmoid function, as the input approaches -∞ / ∞ the output approaches 0 / 1 respectively. Therefore, we need to make our 10 — x function return large numbers, which can be done by multiplying it by a larger number, say 100, to get F=Sigmoid(100(10-x)), now we’ll get F(9.5) =~1.</p><figure><figcaption>Sigmoid(100(10-x))</figcaption></figure><p>Indeed, when training a network with one neuron, it converges to F=Sigmoid(M(10-x)), where M is a scalar that keeps growing during training to make the approximation tighter.</p><figure><figcaption>Tensorboard graph — the X-axis represents the number of training epochs and the Y-axis represents the value of the bias and the weight of the network. The bias and the weight increase/decrease in reverse proportion. That is, the network can be written as M(10-x) where M is a parameter that keeps growing during training.</figcaption></figure><p>To clarify, our single-neuron model is only an approximation of the “&lt;10” function. We will never be able to reach a loss of zero, because the neuron is a continuous function while “&lt;10” is not a continuous function.</p><h2 id="a98e">Min(a, b)</h2><p>Write a neural network that takes two numbers and returns the minimum between them.</p><h2 id="86e5">Solution</h2><p>Like before, let’s start by creating a test dataset and visualizing it</p><pre><span>X_2D = [<br />[random.randrange(-50, 50),<br /> random.randrange(-50, 50)]<br /> for i in range(1000)<br />]<br />Y = [min(a, b) for a, b in X_2D]</span></pre><figure><figcaption>Visualizing the training data for Min(a, b). The two horizontal axes represent the coordinates of the input. 
The vertical axis labeled as “Ground Truth” is the expected output — i.e., the minimum of the two input coordinates</figcaption></figure><p>In this case, ReLU activation is a good candidate because it is essentially a maximum function (ReLU(x) = max(0, x)). Indeed, using ReLU one can write the min function as follows</p><pre><span>min(a, b) = 0.5 (a + b -|a - b|) = 0.5 (a + b - ReLU(b - a) - ReLU(a - b))</span></pre><p><strong><em>[Equation 1]</em></strong></p><p>Now let’s build a small network that is capable of learning <em>Equation 1</em>, and try to train it using gradient descent</p><pre><span>class MinModel(nn.Module):<br />  def __init__(self):<br />      super(MinModel, self).__init__()<p>      # For ReLU(a-b)<br />      self.fc1 = nn.Linear(2, 1)<br />      self.relu1 = nn.ReLU()<br />      # For ReLU(b-a)<br />      self.fc2 = nn.Linear(2, 1)<br />      self.relu2 = nn.ReLU()<br />      # Takes 4 inputs<br />      # [a, b, ReLU(a-b), ReLU(b-a)]<br />      self.output_layer = nn.Linear(4, 1)</p><p>  def forward(self, x):<br />      relu_output1 = self.relu1(self.fc1(x))<br />      relu_output2 = self.relu2(self.fc2(x))<br />      return self.output_layer(<br />          torch.cat(<br />             (x, Relu_output1, relu_output2),<br />             dim=-1<br />          )<br />      )</p></span></pre><figure><figcaption>Visualization of the MinModel computation graph. Drawing was done using the <a href="https://github.com/mert-kurttutan/torchview" target="_blank">Torchview</a> library</figcaption></figure><p>Training for 300 epochs is enough to converge. Let’s look at the model’s parameters</p><pre><span>&gt;&gt; for k, v in model.state_dict().items():<br />&gt;&gt;   print(k, ": ", torch.round(v, decimals=2).numpy())<p>fc1.weight :  [[-0. -0.]]<br />fc1.bias :  [0.]<br />fc2.weight :  [[ 0.71 -0.71]]<br />fc2.bias :  [-0.]<br />output_layer.weight :  [[ 1.    0.    0.   -1.41]]<br />output_layer.bias :  [0.]</p></span></pre><p>Many weights are zeroing out, and we are left with the nicely looking</p><pre><span>model([a,b]) = a - 1.41 * 0.71 ReLU(a-b) ≈ a - ReLU(a-b)</span></pre><p>This is not the solution we expected, but it is a valid solution and even <strong>cleaner than Equation 1! </strong>By looking at the network we learned a new nicely looking formula! Proof:</p><p>Proof:</p><ul><li>If <em>a &lt;= b: model([a,b]) = a — ReLU(a-b) = a — 0 = a</em></li><li>If <em>a &gt; b: a — ReLU(a-b) = a — (a-b) = b</em></li></ul><h2 id="f31f">Is even?</h2><p>Create a neural network that takes an integer x as an input and returns x mod 2. That is, 0 if x is even, 1 if x is odd.</p><p>This one looks quite simple, but surprisingly it is impossible to create a finite-size network that correctly classifies each integer in (-∞, ∞) (using a standard non-periodic activation function such as ReLU).</p><h2 id="c7fe"><strong><em>Theorem: is_even needs at least log neurons</em></strong></h2><p><em>A network with ReLU activations requires at least n neurons to correctly classify each of 2^n consecutive natural numbers as even or odd (i.e., solving is_even).</em></p><h2 id="d509"><strong><em>Proof: Using Induction</em></strong></h2><p><strong>Base: n == 2:</strong> Intuitively, a single neuron (of the form <em>ReLU(ax + b)</em>), cannot solve <em>S = [i + 1, i + 2, i + 3, i + 4]</em> as it is not linearly separable. For example, without loss of generality, assume <em>a &gt; 0 </em>and <em>i + 2</em> is even<em>. 
</em>If <em>ReLU(a(i + 2) + b) = 0, </em>then also <em>ReLU(a(i + 1) + b) = 0 </em>(monotonic function)<em>, </em>but <em>i + 1</em> is odd.<br />More <a href="https://en.wikipedia.org/wiki/Perceptrons_(book)#The_XOR_affair" target="_blank">details</a> are included in the classic Perceptrons book.</p><p><strong>Assume for n, and look at n+1: </strong><em>Let S = [i + 1, …, i + 2^(n + 1)]</em>, and assume, for the sake of contradiction, that <em>S</em> can be solved using a network of size <em>n</em>. Take an input neuron from the first layer <em>f(x) = ReLU(ax + b)</em>, where <em>x</em> is the input to the network. <em>WLOG a &gt; 0</em>. Based on the definition of ReLU there exists a <em>j</em> such that: <br /><em>S’ = [i + 1, …, i + j], S’’ = [i + j + 1, …, i + 2^(n + 1)]<br />f(x ≤ i) = 0<br />f(x ≥ i) = ax + b</em></p><p>There are two cases to consider:</p><ul><li>Case <em>|S’| ≥ 2^n</em>: dropping <em>f</em> and all its edges won’t change the classification results of the network on S’. Hence, there is a network of size <em>n-1</em> that solves S’. Contradiction.</li><li>Case <em>|S’’|≥ 2^n</em>: For each neuron <em>g</em> which takes <em>f</em> as an input <em>g(x) =</em> <em>ReLU(cf(x) + d + …) = ReLU(c ReLU(ax + b) + d + …)</em>, Drop the neuron <em>f</em> and wire <em>x</em> directly to <em>g</em>, to get <em>ReLU(cax + cb + d + …)</em>. A network of size <em>n — 1</em> solves <em>S’’</em>. Contradiction.</li></ul><h2 id="c0bf">Logarithmic Algorithm</h2><p><em>How many neurons are sufficient to classify [1, 2^n]? I have proven that n neurons are necessary. Next, I will show that n neurons are also sufficient.</em></p><p>One simple implementation is a network that constantly adds/subtracts 2, and checks if at some point it reaches 0. This will require O(<em>2^n</em>) neurons. A more efficient algorithm is to add/subtract powers of 2, which will require only O(n) neurons. More formally: <br /><em>f_i(x) := |x — i|<br />f(x) := f_1∘ f_1∘ f_2 ∘ f_4∘ … ∘ f_(2^(n-1)) (|x|)</em></p><p>Proof:</p><ul><li>By definition:<em>∀ x ϵ[0, 2^i]: f_(2^(i-1)) (x) ≤ 2^(i-1).<br />I.e., cuts the interval by half.</em></li><li>Recursively<em> f_1∘ f_1∘ f_2 ∘ … ∘ f_(2^(n-1)) (|x|) </em>≤ 1</li><li>For every even <em>i: is_even(f_i(x)) = is_even(x)</em></li><li>Similarly <em>is_even(f_1( f_1(x))) = is_even(x)</em></li><li>We got <em>f(x) ϵ {0,1}</em> and <em>is_even(x) =is_even(f(x))</em>. QED.</li></ul><h2 id="cc15">Implementation</h2><p>Let’s try to implement this algorithm using a neural network over a small domain. We start again by defining the data.</p><pre><span>X = [[i] for i in range(0, 16)]<br />Y = [z[0] % 2 for z in X]</span></pre><figure><figcaption>is_even data and labels on a small domain [0, 15]</figcaption></figure><p>Because the domain contains 2⁴ integers, we need to use 6 neurons. 5 for <em>f_1∘ f_1∘ f_2 ∘ f_4∘ f_8, </em>+ 1 output neuron. 
Let’s build the network and hardwire the weights</p><pre><span>def create_sequential_model(layers_list = [1,2,2,2,2,2,1]):<br />  layers = []<br />  for i in range(1, len(layers_list)):<br />      layers.append(nn.Linear(layers_list[i-1], layers_list[i]))<br />      layers.append(nn.ReLU())<br />  return nn.Sequential(*layers)<p># This weight matrix implements |ABS| using ReLU neurons.<br /># |x-b| = Relu(-(x-b)) + Relu(x-b)<br />abs_weight_matrix = torch_tensor([[-1, -1],<br />                                  [1, 1]])<br /># Returns the pair of biases used for each of the ReLUs.<br />get_relu_bias = lambda b: torch_tensor([b, -b])</p><p>d = model.state_dict()<br />d['0.weight'], d['0.bias'] = torch_tensor([[-1],[1]]), get_relu_bias(8)<br />d['2.weight'], d['2.bias'] = abs_weight_matrix, get_relu_bias(4)<br />d['4.weight'], d['4.bias'] = abs_weight_matrix, get_relu_bias(2)<br />d['6.weight'], d['6.bias'] = abs_weight_matrix, get_relu_bias(1)<br />d['8.weight'], d['8.bias'] = abs_weight_matrix, get_relu_bias(1)<br />d['10.weight'], d['10.bias'] = torch_tensor([[1, 1]]), torch_tensor([0])<br />model.load_state_dict(d)<br />model.state_dict()</p></span></pre><p>As expected we can see that this model makes a perfect prediction on [0,15]</p><figure></figure><p>And, as expected, it doesn’t generalizes to new data points</p><figure></figure><p>We saw that we can hardwire the model, but would the model converge to the same solution using gradient descent?</p><figure></figure><p>The answer is — not so easily! Instead, it is stuck at a local minimum — predicting the mean.</p><p>This is a known phenomenon, where gradient descent can get stuck at a local minimum. It is especially prevalent for non-smooth error surfaces of highly nonlinear functions (such as is_even).</p><p>More details are beyond the scope of this article, but to get more intuition one can look at the many works that investigated the classic XOR problem. Even for such a simple problem, we can see that gradient descent can struggle to find a solution. In particular, I recommend Richard Bland’s short <a href="https://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR148.pdf" target="_blank">book</a> “Learning XOR: exploring the space of a classic problem” — a rigorous analysis of the error surface of the XOR problem.</p><h2 id="8b09">Final Words</h2><p>I hope this article has helped you understand the basic structure of small neural networks. Analyzing Large Language Models is much more complex, but it’s an area of research that is advancing rapidly and is full of intriguing challenges.</p><p>When working with Large Language Models, it’s easy to focus on supplying data and computing power to achieve impressive results without understanding how they operate. However, interpretability offers crucial insights that can help address issues like fairness, inclusivity, and accuracy, which are becoming increasingly vital as we rely more on LLMs in decision-making.</p><p>For further exploration, I recommend following the <a href="https://www.alignmentforum.org/" target="_blank">AI Alignment Forum</a>.</p><p>*All the images were created by the author. The intro image was created using ChatGPT and the rest were created using Python libraries.</p></div>",
  "author": "Amir Taubenfeld",
  "favicon": "https://miro.medium.com/v2/resize:fill:256:256/1*VzTUkfeGymHP4Bvav-T-lA.png",
  "source": "towardsdatascience.com",
  "published": "2024-09-10T16:55:36.262Z",
  "ttr": 375,
  "type": "article"
}
WetHat changed the title from "All images missing from article" to "Images missing from article when contained in 'figure' elements" on Sep 12, 2024
WetHat changed the title from "Images missing from article when contained in 'figure' elements" to "Images missing when 'img' tag under a 'picture' tag has no 'src' attribute" on Sep 12, 2024