Show HN: Simply explain 20k concepts using GPT

61 points
a year ago
by kirill5pol

Comments


radicaldreamer

The amount of content that's going to be generated in the next few years is going to absolutely drown anything humanity has created thus far. I would not be surprised to see a wholesale return to analog/"old" knowledge once the vast majority of what's on the network becomes unreliable/generated.

a year ago

waboremo

Yes, but this also assumes that a lot of the content created by humanity has been worthwhile. A lot of it is pure noise. YouTube is a really great example of this: in between the "hidden gems" and "popular high quality" works there's a whole bunch of noise that might as well have been generated by AI. Actually, I wouldn't be surprised if some of it were at least partially AI-generated; they just have some cheap voice actor read the generated script for them.

I think you're going to see a very rapid pendulum swing here. An absurd amount of content is going to be generated and flood the various platforms, and in turn the platforms are going to try to combat this by creating more centralized sources of information. The return to analog knowledge seems a bit far-fetched. I highly doubt that would be an outcome, if only because convenience trumps it. Look at librarians: you can talk to one and get much better direction to information than by asking Google, but few people do that. I can't see that changing.

a year ago

humanistbot

> Yes, but this also assumes that a lot of the content created by humanity has been worthwhile. A lot of it is pure noise.

And all of those models have been indiscriminately trained on the sum total of that pure noise. What you are getting from these models is a cleaned-up, grammatically perfect, auto-editorialized synthesis of that pure noise.

a year ago

davisoneee

Right now, the ability for people to put out shit content is limited by human timescales.

AI algorithms can generate 'noise' at a much faster rate than people can, so it will be even harder to find the hidden gems.

a year ago

blakers95

And what happens when the models themselves are trained primarily on the data they produced?

a year ago

danielmarkbruce

They will spit out crappy, copycat material which might be fake. Like 90% of the material created by humans. And then the people with something valuable to say will have something original and valuable. And we'll be back to having a search/curation/review problem on our hands.

a year ago

vkou

I don't learn from 90% of material created by humans.

I learn from the 1% of material that is created by humans who are, to some degree, experts in their field.

Sometimes it's also wrong, but it's not because they were just lazily regurgitating rando 'net posts.

a year ago

overengineer

We already pretrain them on random internet crap, then fine-tune on supervised data.
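
A minimal sketch of that two-stage recipe (toy model, random stand-in data, and made-up hyperparameters, not any real training setup):

    import torch
    import torch.nn as nn

    vocab, dim = 1000, 64
    model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: self-supervised pretraining via next-token prediction
    # on unlabeled text (the "random internet crap").
    pretrain_opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    for _ in range(100):
        tokens = torch.randint(0, vocab, (32,))  # stand-in for scraped text
        logits = model(tokens[:-1])              # predict each next token
        loss = loss_fn(logits, tokens[1:])
        pretrain_opt.zero_grad(); loss.backward(); pretrain_opt.step()

    # Stage 2: supervised fine-tuning on curated (input, label) pairs,
    # typically at a much lower learning rate.
    finetune_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
    for _ in range(10):
        x = torch.randint(0, vocab, (32,))       # stand-in for supervised inputs
        y = torch.randint(0, vocab, (32,))       # stand-in for labels
        loss = loss_fn(model(x), y)
        finetune_opt.zero_grad(); loss.backward(); finetune_opt.step()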

a year ago

globalise83

For a similar idea, but human-generated (by leading experts in the respective fields), I can recommend the Very Short Introduction series by Oxford University Press. https://en.wikipedia.org/wiki/Very_Short_Introductions

a year ago

empdee

First page I clicked was the Computer Science... Sudoku page. Really?

> At each step of the recursion, you try out all of the available options, and as soon as one of them leads to the correct solution, you backtrack to the previous step to explore any remaining options there or to try a different option there.
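
For contrast, a minimal sketch of how textbook backtracking actually proceeds: you backtrack when a digit fails, not when it leads to the solution, and you stop as soon as a solution is found (the 9x9 list-of-lists board and function names here are just illustrative):

    def solve(board):  # board: 9x9 list of lists, 0 = empty cell
        for r in range(9):
            for c in range(9):
                if board[r][c] == 0:
                    for digit in range(1, 10):
                        if valid(board, r, c, digit):
                            board[r][c] = digit
                            if solve(board):  # this digit may lead to a solution
                                return True   # solved: stop, no more backtracking
                            board[r][c] = 0   # this digit failed: backtrack
                    return False              # no digit fits: force caller to backtrack
        return True                           # no empty cells left: solved

    def valid(board, r, c, digit):
        if digit in board[r]:
            return False
        if digit in (board[i][c] for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(board[br + i][bc + j] != digit
                   for i in range(3) for j in range(3))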

a year ago

gagabity

I think it would be interesting if you periodically re-asked the same question, maybe even of different AIs, and kept all versions of the answers accessible, so it's possible to see the evolution and compare them.
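
A minimal sketch of what that could look like (record_answer, the questions, and the model names are all hypothetical):

    from datetime import datetime, timezone

    history: dict[str, list[dict]] = {}  # question -> list of answer versions

    def record_answer(question: str, model: str, answer: str) -> None:
        history.setdefault(question, []).append({
            "model": model,
            "asked_at": datetime.now(timezone.utc).isoformat(),
            "answer": answer,
        })

    # Usage: periodically re-ask, store every version, compare later.
    record_answer("What is backtracking?", "gpt-4", "Backtracking is ...")
    record_answer("What is backtracking?", "claude", "A search strategy ...")
    for version in history["What is backtracking?"]:
        print(version["model"], version["asked_at"])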

a year ago

nickvec

"A collection of notes on (allegedly) written by Plato himself."

Might want to remove "on" to make the sentence grammatically correct.

a year ago

kirill5pol

Oops thanks for catching that!

a year ago

odood

Congratulations on your website. Blackhat SEOs 15 years ago would've been jealous. Let us know if it ranks.

a year ago

Danjoe4

I hate that this is what SEO has come to, but I also respect the grind.

a year ago

moltar

What prompt did you use?

a year ago

amelius

Sounds like you reinvented Wikipedia's "Simple English" pages, but in a way that can't be trusted very much.

a year ago

thenberlin

Yeah, no kidding. This is "what LLMs are not meant for" 101: it'll get a lot right, a solid amount of the time (spare me the anecdotes), but the idea that someone who doesn't know anything about a topic should take its output as ground truth is exactly the sort of misplaced trust that risks flipping these models from a massive technological/productivity/scientific/etc. leap into an existential threat.

a year ago

jszymborski

Big agree. It feels wrong to dissuade someone from building something, since this is likely an exercise in creativity, but I have to say this goes on my "Bad Use of LLMs" list.

Sure, it's possible to "learn" from LLMs in the sense that they might spark some idea you wouldn't have thought of, but taking their output as a source of knowledge is exactly what you shouldn't use an LLM for.

a year ago

humanistbot

> It feels wrong to dissuade someone from building something

I understand this feeling, but it is important to push back against it, most importantly for the developer's own development. Not all ideas are good. Going from my own thought process, I'd easily wager that most ideas are bad, or at least not good. That is why we have places like HN: to share new ideas and get feedback. It is best for the person with the idea (and for society) when their blatantly bad idea is met with generous, kind, informative, and direct criticism.

a year ago

kirill5pol

Yeah, it's definitely not at the level where it can be fully trusted yet, but I found GPT (and this) quite helpful for learning about something new that I don't have any background in. That's also why I think that showing the source of something might be a good way to improve trust (although that's still not perfect...).

a year ago

humanistbot

If you don't have any background in a topic, then how do you know it is getting it right? You don't know enough to know if you can trust any given response. Either you have to independently verify what it says with trusted sources, or you just throw your hands up and accept that you're getting taught by the digital equivalent of a once-brilliant professor who did too many hits of LSD.

a year ago

Mamadou_H

Valid point. However, I personally believe that the generated questions hold greater significance, as they provide direction on what to explore in a particular subject (something that GPT by itself does not provide, since it needs to be prompted first).

a year ago

thih9

Another data point: I recently asked ChatGPT for TV show recommendations and got helpful (and not made-up) results.

What was the subject, and what prompt did you use when asking about the books?

(looks like parent comment has been edited; earlier it mentioned asking GPT for good book recommendations and getting made up results)

a year ago

amelius

I don't recall, but I remember posting it here and some other people tried it as well and noticed the same. Anyway, I removed the "book" part of my comment briefly after posting because I thought it detracted from the main point of it.

a year ago

vkou

Every time I ask an LLM to teach me something I don't know, I am incredibly impressed by the quality of its answer.

But when I ask an LLM to tell me something that I am an expert in, I am usually incredibly disappointed by the bullshit it spews.

a year ago

lxgr

To be fair, this is how I often feel reading the news.

a year ago

Mamadou_H

This particular use of LLMs could prove helpful in the sense that it shows you what to look for in a given subject matter. The generated answer can easily be verified.

a year ago

dalmo3

Gell-Mann AImnesia?

a year ago

arroz

Congrats on spreading misinformation

a year ago
