
Added SmallestNonZero constant #46

Merged 7 commits into x448:master on May 5, 2024

Conversation

janpfeifer
Contributor

  • Added the SmallestNonZero constant (0.00006109476).
  • Changed go.mod to require Go ≥ 1.13, the release that added support for binary integer literals. That is still very old (we are on Go 1.22 already); a minimal illustration follows.
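
For context, the binary literal used for the constant is a Go 1.13+ language feature, which is what the go.mod directive gates. A minimal sketch (the identifier smallestNonzeroBits is hypothetical, not from the PR's diff):

package float16

// Binary integer literals (0b...) and hex float literals (0x1p-24)
// were both added in Go 1.13, hence the `go 1.13` directive in go.mod.
const smallestNonzeroBits = 0b0000000000000001 // == 0x0001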

janpfeifer and others added 3 commits April 27, 2024 09:20
Added SmallestNonzero.
Removed spurious empty line.
Collaborator

@fxamacker fxamacker left a comment


@janpfeifer Thanks for opening this PR to add SmallestNonZero!

Added the SmallestNonZero constant (0.00006109476).

Just a few suggestions for your consideration; maybe we can:

  • Replace 0.00006109476 with 0.000000059604645 and shorten Float16(0b0000010000000001) to Float16(0x0001)

  • Add a test for SmallestNonZero == Float16(0x0001) == 0x1p-14 * 0x1p-10 == 0.000000059604645 (sketched after this list)

  • Add a comment that we use 1 / 2**(15 - 1 + 10) for float16, while Go's math package uses 1 / 2**(127 - 1 + 23) for float32, and also mention that they are both denormal values
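
In code, the constant and test might look like the following minimal Go sketch; it assumes the name SmallestNonzero used in the PR commits (the final merged name may differ) and this package's existing Float16 type and Float32 method:

package float16

import "testing"

// SmallestNonzero is the smallest positive nonzero Float16, a denormal:
// 1 / 2**(15 - 1 + 10) = 0x1p-24 = 0.000000059604645.
// All exponent bits are 0 and only the least significant significand bit is 1.
const SmallestNonzero = Float16(0x0001)

// TestSmallestNonzero checks the equivalent forms suggested above.
func TestSmallestNonzero(t *testing.T) {
	const want = float32(0x1p-14 * 0x1p-10) // == 0x1p-24 == 0.000000059604645
	if got := SmallestNonzero.Float32(); got != want {
		t.Errorf("SmallestNonzero.Float32() = %g, want %g", got, want)
	}
}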

Robert Griesemer's commit in Go has more details about float32 and float64 equivalents:
https://go-review.googlesource.com/c/go/+/315170

Thanks again for opening this PR!

@janpfeifer
Contributor Author

Thanks for the review, @fxamacker! I added the test you suggested.

A reminder that I shouldn't blindly trust the AI that suggested the wrong smallest float16... Gemini suggested to me that setting the exponent to 1 was what would make the number be considered denormalized (!?). Does this stand?

In any case, I also tested that the NVIDIA GPU -- which is what I'm targeting -- has the same interpretation of the number, and it holds:

[image: snapshot of a Jupyter notebook cell, using gonb]

Btw, your library allowed me to add support for float16 when training ML models in GoMLX, which in turn almost doubled the training speed of the GNN (Graph Neural Network) model I'm working on now. Many thanks for putting it together.

And on that note, let me register my interest in support for bfloat16 😃 -- the newer GPUs and Google's TPUs support it.

@x448
Owner

x448 commented Apr 30, 2024

@fxamacker thanks for reviewing. PTAL at #48 (it reduces linter noise spotted here).

Owner

@x448 x448 left a comment


@janpfeifer LGTM! Thanks for contributing, and very interesting follow-up!

Collaborator

@fxamacker fxamacker left a comment


@janpfeifer LGTM. Thanks for updating this PR! 👍

A reminder that I shouldn't blindly trust the AI that suggested the wrong smallest float16

I agree. 👍 BTW, I purchased some hardware to dive into AI but still haven't tried it yet due to my work schedule, etc.

It must be interesting to be at Google Research involved in AI, ML, or Gemini at this point in history. BTW, the scholar.google.com link in your GitHub profile stopped working (it was working last weekend), so you may want to update it to https://research.google/people/jan-pfeifer/.

Gemini suggested to me that setting the exponent to 1 was what would make the number be considered denormalized (!?). Does this stand?

No, Gemini's suggestion to set the exponent to 1 does not stand, because for IEEE 754 binary16 (aka float16) to represent a denormal (called "subnormal" in IEEE 754-2008), all the exponent bits need to be 0.

My suggestion to use float16.Float16(0x0001) intentionally sets all the exponent bits to 0 and only the least significant bit of the significand to 1, which correctly produces the smallest nonzero denormal float16 value.

The implicit leading bit of the significand is always 0 for denormals and 1 for normals. Since this bit is adjacent to the exponent field, maybe Gemini's suggestion was affected by that aspect. Or maybe Gemini described a different floating-point format (not IEEE 754, due to the query or the model's data not including it).
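
To make the bit layout concrete, here is a minimal standalone Go sketch (it does not use this library) that decodes raw binary16 bit patterns by hand; 0x0401 is the pattern with the exponent field set to 1, which yields the PR's original value 0.00006109476, a small normal rather than a denormal:

package main

import (
	"fmt"
	"math"
)

// decodeFloat16 converts a raw IEEE 754 binary16 bit pattern to float64.
// Layout: 1 sign bit, 5 exponent bits (bias 15), 10 significand bits.
// (Inf/NaN, i.e. exponent == 0x1f, is ignored for brevity.)
func decodeFloat16(bits uint16) float64 {
	sign := 1.0
	if bits>>15 != 0 {
		sign = -1.0
	}
	exp := int(bits>>10) & 0x1f
	frac := float64(bits&0x3ff) / 1024
	if exp == 0 {
		// All exponent bits 0: denormal, implicit leading bit is 0.
		return sign * math.Ldexp(frac, -14)
	}
	// Nonzero exponent: normal, implicit leading bit is 1.
	return sign * math.Ldexp(1+frac, exp-15)
}

func main() {
	fmt.Println(decodeFloat16(0x0001)) // 2^-24 ~ 5.9604645e-08 (smallest denormal)
	fmt.Println(decodeFloat16(0x0401)) // ~ 6.109476e-05 (normal; exponent field = 1)
}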

There is a useful slide from the IEEE Symposium on Security & Privacy (2015) talk, at the 2-minute mark on YouTube.

I currently don't have access to IEEE 754, but one way to confirm the smallest float16 denormal value (and its float32 and float64 equivalents) is the C++23 standard library:

/* Copyright © 2024 Faye Amacker

    This C++23 program prints the smallest nonzero floats. Compile and run:
    $ g++ -std=c++2b main.cpp -o foo && ./foo

    For comparison to equivalents in Go, see:
    https://github.com/x448/float16/pull/46
    https://go-review.googlesource.com/c/go/+/315170
*/

#include <iostream>
#include <limits>
#include <stdfloat>
#include <string>

// Print the smallest denormal and smallest normal value for the
// std::numeric_limits specialization T.
template <typename T> void print(const std::string& name)
{
	std::cout << name << " has smallest denormal " << T::denorm_min();
	std::cout << " and normal " << T::min() << "\n";
}

int main() {
	print<std::numeric_limits<std::float16_t>>("float16");
	print<std::numeric_limits<std::float32_t>>("float32");
	print<std::numeric_limits<std::float64_t>>("float64");
	return 0;

	/* Output:
	    float16 has smallest denormal 5.96046e-08 and normal 6.10352e-05
	    float32 has smallest denormal 1.4013e-45 and normal 1.17549e-38
	    float64 has smallest denormal 4.94066e-324 and normal 2.22507e-308
	*/
}

@fxamacker fxamacker merged commit 9c0fe7e into x448:master May 5, 2024
13 checks passed
@fxamacker
Collaborator

Btw, your library allowed me to add support for float16 when training ML models in GoMLX, which in turn almost doubled the training speed of the GNN (Graph Neural Network) model I'm working on now. Many thanks for putting it together.

Nice! 👍

It was @x448 who wrote/contributed this float16 library to my CBOR codec, and I'm happy it was extracted out of fxamacker/cbor as a standalone package so that other projects like GoMLX can use it too! Thanks @x448!

And on that note, let me register my interest in support for bfloat16 😃 -- the newer GPUs and Google's TPUs support it.

@x448 PTAL at bfloat16? 🙏

@fxamacker fxamacker mentioned this pull request May 5, 2024
@janpfeifer
Contributor Author

Thanks for the explanation, @fxamacker! And I was not aware of CBOR -- very neat. Clearly, I was at Google for too long... (using protobufs)

Btw, I just retired from Google last Wednesday; maybe that's why the old Scholar link no longer works. Thanks for the suggested alternative (although the page is incomplete) -- it seems I'll have to create my own page.

At Google it was really interesting indeed -- even though I didn't work on Gemini: I worked on YDF, SimpleML for Sheets, TF-GNN, and other internal projects. The only drawback for me was that, being a manager in the last few years, I rarely had the time to actually code and experiment myself. That's one of the reasons I started investing my time in creating my own OSS projects.
