Training mRNA Language Models Across 25 Species for $165

147 points
5 days ago
by maziyar

Comments


seamossfet

The problem with models like this is that they're built on very little training data we can trace back to verifiable protein data. The Protein Data Bank, and other sources of training data for work like this, contain a lot of broken structures and "creative liberties" taken to infer a structure from instrument data. It's a very complex process that leaves a lot open to interpretation.

On top of that, we don't have a clear understanding of how certain positions (conformations) of a structure affect underlying biological mechanisms.

Yes, these models can predict surprisingly accurate structures and sequences. Do we know if these outputs are biologically useful? Not quite.

This technology is amazing, don't get me wrong, but to the average person they might see this and wonder why we can't go full futurism and solve every pathology with models like these.

We've come a long way, but there's still a very very long way to go.

2 days ago

stardust2

How do we get more verifiable protein data? So even if we had better data, we don't yet understand how the structure impacts the biology?

2 days ago

pfisherman

Nice work! Here is an article you may find helpful if you have not already come across it [0]. You may also want to consider benchmarking against some non-ML methods [1].

0. https://pubmed.ncbi.nlm.nih.gov/35318324/

1. https://www.nature.com/articles/s41586-023-06127-z

2 days ago

xyz100

What makes this dataset or problem worth solving compared to other health datasets? Would the results on this task be broadly useful to health?

2 days ago

CyberDildonics

What other "datasets" are you talking about? How do you "solve a dataset" ?

2 days ago

xyz100

You solve a dataset when you learn what there is to learn about the phenomenon of interest. The limit of such phenomena is “cure all disease”, and clearly this is not solving that.

2 days ago

CyberDildonics

What are you talking about? "the phenomenon of interest"? There is nothing you wrote in either comment that makes sense.

What is a "dataset" that has been "solved" and what did the program do that 'solved' it?

2 days ago

xyz100

MNIST (the number classification task) has been “solved” a billion times and it is hard to imagine any subsequent advances there as scores using a variety of methods have hit the saturation point of accuracy. Any further improvements are likely overfitting to noise. Therefore, we know that it is easy to detect handwritten numbers. However, we may not know how to detect other things as well, like reading an MRI. Those datasets/tasks are clearly different and require different techniques. Training an LLM is likewise different.
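To make the saturation point concrete, here's a rough sketch. It uses scikit-learn's small bundled 8x8 digits dataset as a lightweight stand-in for MNIST (the classifier choice and hyperparameters are illustrative, not from any benchmark in the thread):

```python
# Illustrative: even a simple off-the-shelf classifier nearly saturates
# small digit-recognition benchmarks. scikit-learn's bundled 8x8 digits
# dataset stands in for MNIST here.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(gamma=0.001)  # RBF-kernel SVM, a standard choice for this dataset
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.3f}")  # typically around 0.99 on this split
```

Past that point, squeezing out further fractions of a percent tells you little about anything except the benchmark itself, which is the sense in which the task is "solved."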

a day ago

CyberDildonics

> has been “solved” a billion times

If it was really solved, wouldn't it just need to happen once?

You think classifying handwriting of 10 numbers is the same as this, which took someone 55 hours of GPU time to go through?

I have no idea what point you're trying to make and I can't tell if you do either. You were talking about "solving" other "health datasets" but you can't even come up with one or what that means.

a day ago

xyz100

If you want to be literal with language, then do you ever really “solve” anything? Even tying your shoes is not solved. One day you may tie them better, but for practical purposes we can say it is solved.

Likewise, you can spend 55 hours of GPU time to produce very different things. Can those 55 hours cure cancer? Definitely not. Can it pick up correlations with a small subset of proteins that are perhaps not representative of practical problems? Probably. Can it learn a pattern to tie your shoes, given all your life experiences tying them? Sure.

I asked the question to determine what is the impact of the task and dataset. Curing cancer is huge, tying shoes is not. What are the strengths and limitations?

8 hours ago

CyberDildonics

> If you want to be literal with language, then do you ever really “solve” anything?

You are the one who said it and you can't even explain what you meant, you just get mad that anyone would ask.

7 hours ago

xyz100

Since I am hitting the reply depth: You “solve” a dataset or task when you translate a model into actual real-world problems by building one that actually “works” (not just scores high accuracy). What is the point of training the model otherwise, other than writing blog posts? Beyond that, you can train a model that performs well on the dataset but is less useful in the real world.

This is a health dataset, and there are many inputs and outputs to health (e.g., cell level, protein level, tumors, organs, etc.). In this case it is mRNA-focused, which is a broad category that translates to potential immune-response applications like vaccines (exactly what kind of therapy, I’m not sure, other than “25 species”). Once the model is trained, you can use it to solve real problems, perhaps to develop a therapy that makes its way to clinical trials and eventually actually treats some disease. The model by itself is useless without the ability to have that impact.

So for other examples, take any disease (e.g., Covid19), create a dataset to mirror that problem using some technique (e.g., Covid19 mRNA prediction of some sort), and solve it to create a treatment (e.g., get a safe and effective vaccine). Obviously, you can say the vaccine can be improved so it is not “solved”, but most people would be quite happy with an “almost cure for cancer” even if it wasn’t literally optimal (we don’t even know whether a cure for cancer is possible).

My suggestion and question to the author is to outline the implications of the work rather than focusing on accuracy statistics, which are meaningless without such context.

7 hours ago

basyt

yeah lol no shit. let's not get bothered by reactionaries...

a day ago

nradclif

"Complete results, architectural decisions, and runnable code below."

This is a weird post, there doesn't seem to be any "below" here. Another comment linked the article: https://huggingface.co/blog/OpenMed/training-mrna-models-25-...

2 days ago

justinclift

Yeah. Phrasing like "Complete results, architectural decisions, and runnable code below." is literally how AI outputs stuff, so I'd expect the post was AI-written too. :(

a day ago

rubicon33

Can someone explain what one might use this model for? As a developer with a casual interest in biology, it would be fun to play with, but honestly I'm not sure what I would do with it.

2 days ago

colechristensen

You can get your feet wet with genetic engineering for surprisingly little money.

This guy shows a lot of how it's done: https://www.youtube.com/@thethoughtemporium

Basically you can design/edit/inject custom genes into things and see real results while spending on the order of $100-$1000.

2 days ago

com2kid

We actually did this in my high school genetics class back in 1999! We made bacteria change color by splicing in a gene. Awesome stuff.

The (public!) school had a grant from one of Seattle's biotech boom companies.

2 days ago

someuser54541

Is there something like this in text/readable format?

2 days ago

_zoltan_

My main concern is using fungi. If it ends up in my lungs I'm most likely screwed, right?

2 days ago

nurettin

Yes, but most students produce their best work while infected.

2 days ago

colechristensen

This is the classic meme https://www.reddit.com/r/labrats/comments/mmv2ig/lab_strains...

Lab strains of things tend to be extremely sensitive and not human adapted. You shouldn't study and modify human-infecting organisms in your basement anyway. While you shouldn't ignore protective equipment and proper procedure... paranoia about infecting yourself with a lab leak isn't warranted.

2 days ago

_zoltan_

I'd love to experiment with this stuff, I just literally have no idea how to start safely.

a day ago

jazzpush2

A codon-based model is cool. I know NVIDIA is building quite a large one.

At GTC they showed an SAE (sparse autoencoder) they built on a smaller version of it, allowing you to see what their model learned: https://research.nvidia.com/labs/dbr/blog/sae/
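For anyone wondering what "codon-based" means in practice: instead of tokenizing a sequence one nucleotide at a time, you split the coding sequence into non-overlapping triplets (codons), each of which becomes one token. A minimal sketch (purely illustrative, not the tokenizer of any model mentioned here):

```python
# Minimal sketch of codon-level tokenization: split a coding sequence
# into non-overlapping nucleotide triplets, one token per codon.

def codon_tokenize(seq: str) -> list[str]:
    """Split an mRNA/CDS string into codons, dropping any trailing partial codon."""
    seq = seq.upper().replace("U", "T")  # normalize RNA to the DNA alphabet
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

# 4^3 = 64 possible codons, so the base vocabulary is tiny compared to
# natural-language tokenizers.
print(codon_tokenize("AUGGCCAUUGUAA"))  # → ['ATG', 'GCC', 'ATT', 'GTA']
```

The appeal is that codons are the unit the ribosome actually reads, so the token boundaries line up with biological meaning (and with synonymous-codon usage patterns) in a way single-nucleotide tokens don't.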

2 days ago

dhruv3006

Interesting work. Looks like AI for science is having its day right now.

2 days ago

khalic

> In Progress: CodonJEPA

JEPA is going to break the whole industry :D

2 days ago

digdugdirk

Can you explain this? I haven't heard of JEPA, and from a quick search it seems to be vision/robotics based?

2 days ago

khalic

It’s a self-supervised learning architecture, and it’s pretty much universal. The loss function runs on embeddings, plus some other smart architectural choices throughout. Worth diving into for a few hours; Yann LeCun gives some interesting talks about it.
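The "loss on embeddings" part is the key idea: predict the representation of a target view from a context view, and score the prediction in embedding space rather than reconstructing raw inputs. A toy numpy sketch (the linear encoders, masking scheme, and dimensions are all made up for illustration; real JEPA variants use deep encoders and an EMA target network):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # toy encoder: a single nonlinear projection into embedding space
    return np.tanh(x @ W)

d_in, d_emb = 16, 8
W_ctx = rng.normal(size=(d_in, d_emb))    # context-encoder weights
W_tgt = W_ctx.copy()                      # target encoder (in JEPA, an EMA copy)
W_pred = rng.normal(size=(d_emb, d_emb))  # predictor weights

x = rng.normal(size=(4, d_in))                    # a batch of inputs
context_view = x * (rng.random(x.shape) > 0.5)    # randomly mask ~half the input

target_emb = encoder(x, W_tgt)                    # embed the full (target) view
pred_emb = encoder(context_view, W_ctx) @ W_pred  # predict target embedding from context

# JEPA-style objective: error between predicted and target embeddings,
# computed entirely in embedding space -- no pixel/token reconstruction.
loss = np.mean((pred_emb - target_emb) ** 2)
print(f"embedding-space loss: {loss:.4f}")
```

Because the loss never touches input space, the model is free to ignore unpredictable low-level detail, which is a big part of why LeCun argues for it over reconstruction-based pretraining.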

2 days ago

colingauvin

HN's blindspots never cease to amaze me.

I am a structural biologist working in pharmaceutical design and this type of thing could be wildly useful (if it works).

2 days ago

justinclift

Blind spot?

a day ago

simianwords

What makes these domain-specific models work when we don’t have good domain models for health care, chemistry, economics, and so on?

2 days ago

colechristensen

>we don’t have good domain models for health care, chemistry, economics and so on

Who says we don't?

2 days ago

simianwords

Examples please?

2 days ago

colechristensen

No, it's really simple to search for domain-specific models being used "in production" all over the place.

2 days ago

simianwords

I didn’t find a single one that outperforms a general model.

2 days ago

colechristensen

Ok, alphafold.

2 days ago

simianwords

It’s not a large language model

2 days ago

yieldcrv

Distributing the load on this will probably be infinitely more useful than Folding@home.

2 days ago

HocusLocus

gray goo of the future

2 days ago

skyskys

hmmmm seems like some fake hype.

2 days ago