From Unemployment to Lisp: Running GPT-2 on a Teen's Deep Learning Compiler

116 points
16 days ago
by AymanB

Comments


AymanB

A couple of months ago I found myself unemployed and uncertain about what to do next. I wanted to learn more about deep learning, but from a systems perspective. Coming from Andrew Ng's course on supervised learning, I was eager to learn how deep learning frameworks (or deep learning compilers) like PyTorch or tinygrad work.

I started to poke around tinygrad, learning from the tutorials I found online, and I found it fascinating because it was an actual compiler: it took conventional Python code and translated it into an abstract syntax tree, which was lowered into UOps and ScheduleItems before finally reaching a codegen layer. While the design was interesting, the code was hard to read.

That's when I stumbled across something completely unexpected: a deep learning compiler built on Common Lisp, maintained by a Japanese 18-year-old during his gap year. And we have now accomplished something great: it can run GPT-2!

For now it only generates C kernels, but in the future we would like to support CUDA codegen as well as many other features, and to serve as a learning tool for anyone who would like to work on deep learning compilers in Common Lisp.

16 days ago

a2code

In general, a compiler takes source code and generates object code or an executable. Can you elaborate on what your compiler takes as input and generates as an output?

16 days ago

AymanB

Hello! Thanks for your question.

First of all, there are three layers of abstraction within Caten:

1. caten/apis | High-Level Graph Interface
2. caten/air | Low-Level Graph Interface
3. caten/codegen | AIR Graph => Kernel Generator

The inputs of the compiler are just Common Lisp classes (similar to torch modules). For example, in Common Lisp, we could create a module that does SinCos:

    (defclass SinCos (Func) nil
      (:documentation "The func SinCos computes sin(cos(x))"))

    ;; Forward creates a lazy tensor for the next computation.
    ;; You can skip this process by using the `st` macro.
    (defmethod forward ((op SinCos) &rest tensors)
      (st "A[~] -> A[~]" (tensors)))

    ;; Backward is optional (skipped this time)
    (defmethod backward ((op SinCos) &optional prev-grad)
      (declare (ignore prev-grad))
      nil)

    ;; Lower describes the lowered expression of `SinCos`.
    ;; Since sin(x + pi/2) = cos(x), `b` ends up computing sin(cos(x)).
    (defmethod lower ((op SinCos) &rest inputs)
      (let ((x (car inputs)))
        (with-context
          (a (%sin (%add x (%fconst (/ pi 2)))))
          (b (%sin a)))))
The `apis` layer is the high-level interface, while the `lower` method is the lower-level step before code generation.
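To give a sense of how this is driven from the `apis` layer, here is a minimal usage sketch. The `!sincos` wrapper is hypothetical, and I'm assuming the `make-tensor` and `proceed` entry points here:

    ;; Hypothetical wrapper that applies the SinCos Func to a tensor.
    (defun !sincos (x)
      (forward (make-instance 'SinCos) x))

    ;; Builds the lazy graph, then compiles and runs it.
    (proceed (!sincos (make-tensor `(1) :initial-element 1.0)))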

Next, the framework generates an Abstract VM (AVM) representation:

    #S(AVM :GRAPH Graph[seen=NIL, outputs=(STC6466_1)] {
      <ALLOCATE : TID6464 <- (shape=(1), stride=(1)) where :dtype=FLOAT32>
      <Node[BUFFER] ALLOCATE(NID6480) : SID6479* <- ()>
      <Node[BINARYOPS] ADD(NID6484) : BID6483* <- (TID6464, LID6481)>
      <Node[UNARYOPS] SIN(NID6486) : UID6485* <- (BID6483)>
      <Node[UNARYOPS] SIN(NID6488) : UID6487* <- (UID6485)>
      <Node[SPECIAL/VM] PAUSE/BACKWARD(NID6501) : STC6466_1* <- (UID6487)>
    })
Then, the computation graph is translated into schedule items; note how the ADD, SIN, SIN, and LOAD nodes are fused into a single kernel:

    FastGraph[outputs=(val_6)] {
      { Allocate } : [ val_0 <- (1) ]
      { KERNEL } : [ val_5 <- val_1, val_0 :name=FUSED_SIN_SIN_ADD_LOAD6511]
    }
Finally, the code generation step produces the following C code:

    void fused_sin_sin_add_load6511(float* val_5, const float* restrict val_0);
    void fused_sin_sin_add_load6511(float* val_5, const float* restrict val_0) {
        val_5[0] = sin(sin((val_0[0] + 1.5707964)));
    }
This C code is compiled by a C compiler and executed.
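As a quick sanity check: since sin(x + pi/2) = cos(x), the kernel really does compute sin(cos(x)), and the folded constant 1.5707964 is just pi/2 as a float32. You can confirm the arithmetic from any Lisp REPL:

    ;; For val_0[0] = 1.0, the kernel computes sin(sin(1.0 + pi/2)).
    (sin (sin (+ 1.0 (/ pi 2)))) ; => ~0.5143952
    (sin (cos 1.0))              ; => ~0.5143952, the same value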

So to answer your question: the compiler takes Common Lisp code and generates C functions.

15 days ago

e40

Few compilers generate object code. Most generate an intermediate form, often assembler, which is converted to object code by another tool.

15 days ago


MGzBezycyjEQrk5

So what does this do? What is meant by "deep learning compiler"? From the given examples, Apache TVM sounds like an ML graph execution engine and model deployment solution; ditto tinygrad. Is Caten a Lisp ML framework?

16 days ago

AymanB

Hello, it is the same idea as tinygrad and Apache TVM, but in Common Lisp.

So yes, you could call it a Lisp ML framework.
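Concretely, the workflow mirrors tinygrad's: operations build a lazy tensor graph, and the compiler kicks in only when you ask for the result. A minimal sketch, assuming the `!randn`, `!add`, and `proceed` entry points from caten/apis:

    ;; !randn and !add only build a lazy tensor graph; nothing runs yet.
    (let ((x (!add (!randn `(3 3)) (!randn `(3 3)))))
      ;; proceed compiles the graph down to a C kernel and executes it.
      (proceed x))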

15 days ago