A.I. Cryptid Hunting with Tom Woodward
Ep. 16


Episode description

Escape the Top 40 AI music chart. Abandon your labradoodle. Leave Copilot behind to build your PowerPoint on learner analytics. We’re going hunting for truly weird things. This session will mesh odd music with odder AI examples. Nothing top 40 (or top 40,000 for that matter). Nothing that helps you optimize your productivity. Hallucinations will be features (possibly hallucinations of bugs). Bootlegs, b-sides, deep cuts, covers … an intentional blend of discordant madness focused on breaking your mind out of the commodified schlock poured into your ears every single day. In addition to examples, we’ll look at sources and thought patterns to keep you wandering the edges of spaces where strange things thrive.

Download MP3

Download transcript (.srt)
0:00

Well, hello, everybody on DS106 Radio. It's Meredith Huffman here with Tom Woodward. Thank you so much to Shannon, Jerry, and Jim for that awesome discussion about teams. But let's get into some AI cryptid hunting, using AI and all of that good stuff. So I'm going to pass it to Tom.

0:25

All right, for clarity, we will not hunt any actual things. But we will be discussing some weird stuff about AI, hopefully with some accompanying weird music. And in the Discord chat, I threw in a link to the songs.

0:44

This is a little weird for me, because usually my presentations are very visual. So to do this all via audio is like...

0:52

a stretch. But if I talk about doing weird and difficult things, I ought to try to do them sometimes. And this is it. So a word about song choice. I tried to mix a couple of things in here when I was choosing these songs. So I wanted the title to have some connection to the AI topic, maybe a bit too on the nose for some of these. But when I try and be too smart, I don't make sense to anyone else, even myself at times. And I wanted it to be a little bit more...

1:22

from artists who had, you know, kind of lower monthly listeners on Spotify. Really, the lower, the better. So I spent some time kind of trying to hunt those up. And bonus points if the band, the individual or whatever had an interesting name or a weird backstory. So the music took some picking. And I think with all but one, I got fairly short songs. So let's get us rolling.

1:52

This is The ITS Way. And I break the rule right off the bat with Aesop Rock, who you may have heard of. He's not in the millions of monthly listeners, but he's kind of high. But the song was just too beautiful to resist. Whole album, check it out.

2:09

Integrated Tech Solutions. Building a bridge to a better tomorrow.

2:15

Resources. Networking. Applications. Solutions.

2:19

ITS is a system of lifestyle and industry applications designed to curate a desired multi experience. Using a unique hybrid of machine learning and onsite scrum sessions, our specialists have redefined tech centric problem solving.

2:32

Disrupt. Innovative. Refine. Advance systems. ITS. Offering streamlined implementation of attainable solutions.

2:41

ITS is not responsible for any institutional vital injury or economic downturn that may occur while using ITS patented technologies. All ITS patents are understood as living patents and may not perform as originally intended. Applicants are expected to live for the ITS housing and nutrient grid. ITS soundbath is understood to still be in beta form.

2:56

ITS is not a cult.

2:59

So unlike DS106, ITS is not a cult.

3:05

Some of this AI stuff definitely verges on the cult-like.

3:08

So one of the most fascinating things I find about AI is this idea of hallucinations.

3:16

And the idea that AI might be an unreliable narrator: that we might ask it a question and it might give us false things. And so it's a crazy idea to me for people to be surprised that we would layer technology built by humans on top of stuff scraped from the Internet and Reddit threads, and that this would somehow become a source of unimpeachable truth. And that we ought to really focus on it not giving inappropriate answers, or teaching us how to build napalm, or any of the things that have been on the Internet for so long. That somehow we are going to have this purified version of the Internet where we can ask questions and get truth back.

4:04

Instead, and I've kind of always wanted to do this with either textbooks or teachers, have them really pump up the idea of unreliability in the narration and the narrator.

4:16

I mean, that's kind of what we say we want people to do. We want them to interrogate the source of information. We want them to not trust it by default. We want to verify. We want to do all that stuff.

4:28

And the perfect person to kind of represent this musically for us is Mon Cher Air with his song Unreliable Narrator on the album Dusk A.M.

4:42

He has 39 monthly listeners, so we're going to give him a little bit of extra traffic today with this song.

5:01

[music]

5:51

A little pause... and we are back.


7:34

All right, so how are we going to get from that to bananas, which is our next song? And, uh, I think that's the beautiful thing about AI: you can push it farther into the realm of madness intentionally, right? So when you're messing with AIs and the math that kind of underlies them, you can do different things, like manipulating the temperature and top-p values and some other variables in the system, to make stranger answers.
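The temperature and top-p knobs mentioned here are just math on the model's output probabilities, so the effect can be shown without any particular AI service. Below is a minimal, self-contained sketch with toy logits standing in for a real model's scores:

```python
import math
import random

def sample(logits, temperature=1.0, top_p=1.0, rng=None):
    """Sample a token index from raw scores, roughly as an LLM decoder does.

    temperature < 1 sharpens the distribution (safer, blander picks);
    temperature > 1 flattens it (stranger picks). top_p keeps only the
    smallest set of likely tokens whose cumulative probability reaches p.
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]      # temperature scaling
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]        # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0                            # nucleus (top-p) filter
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    r = rng.random() * sum(probs[i] for i in kept)  # renormalize and draw
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

# With a tiny temperature, the most likely token always wins.
print(sample([2.0, 1.0, 0.1], temperature=0.01, top_p=0.5))  # → 0
```

Cranking the temperature above 1 flattens the distribution, so the low-probability, stranger tokens start winning draws; that is the "stranger answers" effect.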

8:04

It's a delicate line, though, when you get from strange to unintelligible. And that is some of what we're going to look at. We're going to look at how to thread that line, with some examples, and the idea that you might put those controls in the hands of the people watching it. But bananas, oddly enough, have been a theme throughout AI, including ways to look at the underlying data. So is the AI tool the problem, or is it your prompt? It hits on a couple of different banana-related questions there.

8:43

Just for fun, you might also appreciate this Reddit thread, which has an AI having to respond with only one word, and it eventually asks a banana knock-knock joke.

9:01

It's so good that I kind of doubt it could be real, but I will leave that for you to determine, because I am an unreliable narrator, as is Reddit, so the combination has a lot of unreliability.

9:16

If you're bored and want to see me just talking to an AI which only responds with the word banana for a while, you should be able to see that here too.

9:28

So I am kind of cheating in that I am massively spamming the summer camp thread in Discord with various links, which you can see.

9:37

So, you know, cheating or not, it's what I'm going to do.

9:42

And if other people have a problem with it, you can vote in the Discord.

9:48

For instance, I'm going to start up the I'm a Banana song by Onision on the album The Banana Man.

9:57

He has no monthly listeners, as far as I can tell.

10:01

And if you give me enough grief in the chat, I will make the banana song stop sooner than the minute and thirty seconds it lasts.

10:12

I'm a banana. I'm a banana. I'm a banana.

10:17

Look at me move.

10:19

I'm a banana. I'm a banana. I'm a banana.

10:24

Look at me move.

10:26

Banana power. Banana power. Banana power.

10:33

Banana power. Banana power. Banana power.

10:39

Look at me move.

10:42

Uh-oh, banana time! Uh-oh, banana time! Uh-oh, banana time! Stop!

10:52

Stop!

10:58

Stop!

11:12

I am chicken!

11:16

Stop!

11:25

Look at me!

11:29

Look at me!

11:30

Look at me!

11:32

Look at me!

11:33

I'm a banana!

11:35

I'm a banana!

11:37

I'm a banana!

11:44

Whoa. Awesome. I'm glad we stuck through the banana song and the gifs kept on rolling.

11:53

And if anyone has a problem with how I say it, too bad. This is my radio station, so I'm going to say animated formats in whatever way I want. So, I think there is a middle ground, though, right? We have typical AI stuff that is striving for truthfulness. We have the possibility of just utter madness on the other end. And then maybe a middle ground where we just kind of play around with sanity a bit. Just loosen things up, where things seem familiar, but have different elements of strangeness that make you think harder about the responses. You know, you can do that by loosening up the creativity aspects of it.

12:38

So there's the model, and you provide preemptive prompts for it that kind of encourage the AI to behave in certain ways. And you can do this, you know, technically in a lot of different ways. But here is a little example I built as part of our AI messing around at Middlebury that lets you do stuff like control the happiness level. So, you know, the personality, the reliability level, to a degree, and the vocabulary. Given the Talky Tina references earlier in the day, you can kind of get something. So crank it down to unhappy. Turn on the baby talk. Interact with that a little bit. See if it starts to mess with your mind. Right?

You know, I think that there is just some fun stuff out there where you can take these things and make them into stuff that causes different types of reactions, with a lot of intent. Just today, I found this oracle, and I asked it for unique human knowledge about cryptids and got this response, which got me into the transdimensional cryptid hypothesis. I don't know if it's a real thing. It does mesh with some of the quirks and multidimensional theory stuff you might read about. Is it real? I don't know. Is it uniquely adding to human knowledge? Probably not. But it's kind of fun stuff to play around with. And I think a lot of the AI stuff is all focused on productivity, and it doesn't have to be.

14:34

And that brings us along to... I don't know how to say it, because I don't speak Romanian. But this is Discuție în Familie. I'm pretending. Again, unlisted. But Gil Dobrică. I'm sure some Romanians will write in and accost me for butchering this. But I love this song, because it's just familiar enough to kind of bring your memories back, but strange enough for it to feel really, really different.

15:56

A little pause... and we are back.


16:59

All right, much love to Gil.

17:05

And that kind of leads us into this idea, too, that we can provide kind of a layer of data on top of these LLMs. So, I mean, this is kind of boring talk, but it is part of understanding, like, what your options are: the idea that the language model is kind of the thing, the foundation which you can interact with, and it kind of does the understanding, you know, such as it is; the connection between your inputs and constructing language that comes back out, that seems to make sense. And you can put kind of a layer in between that both contextualizes it in terms of what it knows and prioritizes, as well as kind of changing some of the language and almost the personality that's being evidenced, which is kind of fun.

17:59

And that's called... this is called fine-tuning. And we saw some of it with what Taylor did with the Dr. Oblivion bot, in terms of both the voice and the information that was coming back.
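As a concrete picture of what a fine-tuning layer is as data: most pipelines take a file of prompt/completion pairs, one JSON record per line. A minimal sketch follows; the period-flavored examples and field names are assumptions for illustration, not MonadGPT's actual training set or any specific vendor's schema:

```python
import json

# Hypothetical period-flavored Q&A pairs. A real fine-tune would use
# thousands of these, mined from the source texts.
examples = [
    {"prompt": "What cure do you advise for a cough?",
     "completion": "Take a syrup of hyssop and honey, and keep from the night air."},
    {"prompt": "My neighbour hath bewitched my cow. What shall I do?",
     "completion": "Look first to the beast's feed and water before you cry witchcraft."},
]

# JSON Lines: one training example per line, the common interchange format.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl.count("\n") + 1)  # → 2 (two training records)
```

The point is that the "layer on top" is ordinary structured text; the flavor of the completions is what ends up flavoring the model.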

18:12

And you can do it with lots of different things. And after we hear Lyrics Born and Dan the Automator, I couldn't resist those artists, with the song Always Fine Tuning from the Lyrics Born Variety Show Season 2.

18:28

Spotify monthly listeners: a surprising 134,137. But after that, we will look at some other fine-tunings and the gorgeous sounds of 17th Century Chicken Picking.

20:03

So here we go.

20:24

[music]

21:12

Hello, this is Bunch Robber. I'm calling because I'm supposed to be appearing on the Lyrics Born Variety Show.

21:23

Yo, what's up, people? This is Rashawn Amah from the Clown City Rockers crew.

21:27

Can y'all listen to the Lyrics Born Variety Show, y'all?

21:31

Act like you know and stay tuned.

21:33

What up, baby?

21:37

Yes. So...

21:40

When we talk about fine-tuning, I think the best example that I've seen, and the one that got me probably the most excited about this, is called MonadGPT. I threw the link in there, although I think I copied somebody or responded to somebody, because I'm incompetent at times. But this is a reason why people don't do this and try to run a radio show at the same time. In any case, it is a fine-tuning. It's on top of the Mistral-Hermes 2 large language model, but they took 11,000 17th-century English, Latin, and French texts and put them on top. So when you ask it questions, the language is contextualized by those texts, which is super fun.

22:34

And you can ask it stuff like: I think my neighbor bewitched my cow, what do I do? Or: I'm coughing, what sort of medical treatment do you recommend? Those are also kind of fun questions to ask other large language models, at least the one about bewitching cows. So I encourage you to waste some time doing that at some point in your life. Because at least the one I asked first cautioned me about, you know, blaming magic when other things might be involved, but then went on to talk about burning sage, which I thought was, you know, kind of a good idea.

23:14

And they do different fine-tunings. You know, that's one. I believe the same person who did that is working on one with Latin texts, you know, for the Roman era. And that will be super cool. But the idea of kind of interacting with something that's contextualized in the past and flavored by it is kind of fun. And even if it's not exactly accurate, I think it's kind of like what we used to call pre-reading exercises in K-12, which is stuff that just gets you wondering about, you know, the text in general in that case, but in this case, kind of like periods of history. And you have all sorts of fun things that you can do with this, I think.

24:09

There's also another one that's done... Oh, I guess we should listen to this first, shouldn't we? This is when I get carried away. So this is 17th Century Chicken Picking, by Impellitteri. You can tell I'm just the most polished of radio announcers. From the album Screaming Symphony. Again, higher Spotify numbers than I would have thought: 33,146. But I really liked this song, so I may join them in monthly listening.

25:20

So I might be a little dusty.

25:26

And that was the extro from our first presentation.


27:16

Okay, so I don't know what that was, but I found it kind of entertaining. Although it looks from the comments like maybe that is not universally felt. But I spent most of my time on Spotify typing in random words and seeing if there's a playlist that holds them, and then seeing what songs are in it. What inspired this, to some degree, was a playlist I found called "I am once again asked to go cryptid hunting." So, you know, you end up wandering around some strange things when you do strange things.

Now, another example of fine-tuning which I thought was kind of fun is a Twitter bot. I know Twitter has its problems, but that's where they put it, and there's not a whole lot I can do about it. Plus it's a little bit old. And it is a communist Twitter bot, so in some ways you are fighting the system within the system in this case. It's a good one for a couple of different reasons. So one, they fed it a bunch of Marxist literature. And the other one is how to take a large project and destroy it. So this is one that I was kind of playing around with, and it took all the chances it could to espouse a revolution, which I thought was kind of fun. And there are two different articles on the site towardsdatascience.com that talk about how they trained it and how it all works. So it's kind of a good example there, in case you wanted to do something similar.

28:58

We had a Twitter bot that would randomly combine Emily Dickinson poetry. It was the source, the fine-tuning, if you will. It would use kind of primitive Markov chains to mash the words together, and it would spit out tweet-length poems composed of her previous work, which I always thought was kind of cool, but this was long before this kind of fancy AI.
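A Markov chain of this primitive kind really is just a table of which word follows which, plus a random walk over it. A small sketch, using a couple of public-domain Dickinson lines as the corpus:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def markov_poem(chain, start, max_words=12, seed=0):
    """Walk the chain from a start word to produce a short mashed-up line."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and chain[out[-1]]:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

# Two public-domain Dickinson fragments serve as the whole "training set".
corpus = ("Because I could not stop for Death He kindly stopped for me "
          "Hope is the thing with feathers that perches in the soul")
chain = build_chain(corpus)
print(markov_poem(chain, "Because"))
```

Words that appeared after several different predecessors are where the two poems get mashed together, which is exactly the tweet-length recombination effect described above.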

29:26

With that, we will merge into The Communist Party. This is by Lionel Cohen. The album, The Best of Musk Musicopedia, and Spotify monthly listeners, 568.

30:57

Thank you very much.

31:36

So, again, that's fine-tuning, and I just think you could have so much fun with that. I mean, Taylor's incredible session with the Dr. Oblivion bot, you know, is a great example, again, of just how personal and individualized you can make these kinds of interactions, and just kind of how bizarre it can become.

32:01

Is that kind of fun?

32:02

It's kind of scary in certain ways, yeah, probably. But it's also amusing to me, anyway, and I think I'm going to play around with it, maybe with some other options; I don't know what yet.

32:17

You know, I had grand plans about what all I was going to build in preparation for this presentation, but, you know, life often intervenes, and I'll explain some of what I built.

32:32

But it certainly wasn't what I hoped for.

32:36

And the final section here on the AI pieces is about the user experience.

32:42

So, I imagine most of you are interacting with AI and the large language models in the chat user interface, which I think is probably as much a legacy as anything else. You've got that interface because what we were trying to prove initially with these chat models was that they could imitate humans in a way that would trick people into thinking they were real. Right? And so the chat back and forth was a good interaction pattern for that kind of thing.

33:19

But now what you've got is people trying to use the chat interaction stuff to do much more complex and interactive elements.

33:30

And that's interesting.

33:31

And the chat piece really is kind of a disaster for that. You know, it's much harder to do because it doesn't give you any guidance; it doesn't help you towards anything. And then you're kind of endlessly refining your prompts, which we call prompt engineering, which, I guess, from an earlier episode, Chris Deedy was angry, angry about. You know, I've kind of mixed feelings about that.

34:01

How far to go there? I do think, obviously, what you write, the prompt, changes what comes out. And being better at writing those prompts in ways can help you get stuff out faster, or better, or more reliably. Does what comes out change each time? It does, but that's a feature, that's an intended piece.

34:27

How much it changes is probably debatable.

34:31

I think where it gets ugly, too, is like, to really do this at scale, and to see how much doing certain things with prompt engineering or modification matters, you almost end up having to use AI to then evaluate the results and do the testing.

34:49

So there's a lot of AI on top of AI. And, you know, who knows how blurry that sort of lens gets, but I've certainly seen...

35:01

my own patterns impact what comes out. Because "engineering" definitely feels like an overstatement. But if you can say sanitation engineer and mean somebody who puts garbage in a truck, then we can certainly call a prompt engineer somebody who puts textual garbage in a prompt, pretty safely.

35:25

Not that I don't respect the difficulty of moving garbage around.

35:30

But it's probably, at that level, not quite engineering.

35:34

And writing stuff in a prompt, also not quite engineering.

35:39

Hopefully that won't get me attacked or banned in any way.

35:46

When we talk about better interfaces, though, I think one person who is really great to check out is Maggie Appleton.

35:58

I found her after hearing her on a podcast, where she was working with an AI group.

36:05

I think she's since changed work, maybe multiple times since then.

36:11

But if you want to look at somebody who's looking at the capabilities of AI, but then looking at how user interfaces might vary, and how that impacts the creation of tools, she's a great one.

36:28

She's got a couple there in the link I sent.

36:32

One of which is kind of like embedded personas when you're writing that give very particular types of feedback.

36:40

If you are an old-school person, you might think of The Mother of All Windows 98 Books, which you can see at the Internet Archive in the link I just sent.

36:54

I think you have to end up getting an account to check it out.

36:57

But you can check it out for an hour or something like that.

37:00

Just like I remember from reading it a long time ago, that book has these callouts with different personas that give you additional information as you go through the book.

37:16

And what Maggie's done in her kind of demo is to make those things more live around your actual writing.

37:26

So that sort of on-the-fly, targeted type of interaction with AI I think is really exciting and different, and has the potential to lend itself to better interactions that will help people in different ways.

37:49

Another thing she built that I really liked is a cause-and-consequence builder. So you can kind of go backwards: you know, you write a statement, and you can say what caused this, or you can click on the other end and say, what are the consequences of this? And it's just kind of a fun visual way to look at event chains like that.

38:16

We have a faculty member at Middlebury who does some virtual world building, and I think I'm going to build one that will let you build timelines. My goal will be to, you know, have an event entered, and then I want to mess around with, like, a preceding event, or an event that happens after, however you say that. And maybe the further you get from the initiating event, I'll start cranking up some variables to make things stranger. So I think that kind of stuff has some potential. It's like building really specific types of limited AI interactions.
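The "crank up some variables the further you get from the initiating event" idea can be sketched as a simple schedule: pick a sampling temperature that grows with distance from the starting event. All the specific numbers here are made up for illustration:

```python
def temperature_for(distance, base=0.7, ramp=0.15, cap=1.6):
    """Pick a sampling temperature for an event `distance` steps away from
    the initiating event. Nearby events stay plausible; far ones get
    stranger, up to a cap so the output stays at least semi-intelligible.
    (base, ramp, and cap are illustrative values, not tuned settings.)
    """
    return min(base + ramp * distance, cap)

# Temperature climbs steadily with distance from the initiating event.
print([round(temperature_for(d), 2) for d in range(6)])
# → [0.7, 0.85, 1.0, 1.15, 1.3, 1.45]
```

Each generated event would then be requested from the model at its own temperature, so a timeline starts grounded and drifts toward the strange.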

39:10

And I think Mark mentions, you know, they've got an essay feedback thing that they built that's free to use. I think what will be interesting, too, is, like, I don't know what happens when a lot of these API interactions are no longer free. You know, that's one of the things I'm keeping in mind: what happens when I need to run this without, or for, pay? A number of the things I built are tied into APIs with charges, but the charges are pretty low right now, especially if you use some of the older models rather than the newest, freshest thing.

39:52

Another thing, you know, that people bring up here, too, is the energy usage. And I know that training these models requires a huge amount of energy. The individual interactions here with text, I understand less about the energy usage of. So, you know, it's something I'm curious about, but I need to know it, too, in relation to the other things that we're using energy on.

40:20

You know, it's a complex scenario when you start to think about, like, what things in my life are using what amount of energy. It's easy to say a lot. But what I don't know is, you know, whether the amount of AI use I've done here is greater than the amount of ad-tracking software energy I've used. Or, you know, what does it take to stream my software? I don't know.

40:56

You know, I've messed around with some of that stuff in past things. But, yeah, I see some links coming in there about water usage and things like that. And, you know, like I said, I'm curious about it contextually. But while I think about that, and maybe check out some stuff on water usage, let me play User Experience Using Me. And then we'll hit at least one more example of user experience and AI before we move on to other topics.

41:34

So this is Davy Woodland.

41:39

And it is from the album Automatic Self Care.

41:45

[music]

42:01

Experience.

42:04

Using me.

42:11

User.

42:14

Experience.

42:17

Using me.

42:40

World wide web.

42:43

Ain't getting the best of us.

42:46

You know it's in our head.

42:48

And it's time to pull the plug.

42:52

You can't sell enough of yourself.

42:59

You can't double click out of hell.

43:04

User.

43:07

Experience.

43:10

Using me.

43:18

User.

43:21

Experience.

43:24

Using me.

43:26

[music]

43:46

Always on.

43:49

Unlimited high speed.

43:53

Got them.

43:55

Got a cutting edge drug for free.

43:59

You can't sell enough of yourself.

44:11

You can't sell enough of yourself.

44:13

You can't double click out of hell.

44:16

Look at that smooth fade out.

44:18

Because that is a five minute song.

44:20

And I'm not going to play the whole thing.

44:24

Well, I think the other thing, like with the example I shared earlier (where is it?), where you can manipulate both the models and the top-p values and some of that stuff: I think that's part of the user experience option. It lets you see, like, hey, when I'm doing this with the algorithm, what happens to the content that comes out?

44:52

It's a little bit in the silly thing that I have that does baby talk. You know, it just comes up in lots of different ways, where we are trying to do two things, maybe: one, let you use the AI for some purpose, but two, help you build understanding of the AI. So one of the little things that I did, to try and help see bias when we talked about AI, was built around what happens when you ask AI a question.

45:24

You know, in adversarial AI, essentially what it does is one AI head generates a bunch of responses, and another one looks at all of them and kind of says: hey, this is the most human, or the best-sounding one, let's respond with that. So each time, it's generating lots of different responses that you never see.
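The generate-then-judge pattern just described is roughly best-of-n sampling. Below is a toy sketch where both "heads" are stand-in functions; a real system would sample the drafts from a model and score them with another model:

```python
def generate_candidates(prompt):
    # Stand-in for the generator head: several draft responses.
    # (A real system would sample these from the model.)
    return [prompt, prompt.upper(), prompt + ", I think."]

def judge(response):
    # Stand-in for the judging head: score how "human" a draft sounds.
    # Here, hedged endings beat plain text, and all-caps scores worst.
    score = 0
    if not response.isupper():
        score += 1
    if response.endswith("I think."):
        score += 2
    return score

def best_of_n(prompt):
    """Generate several responses you never see; return only the winner."""
    candidates = generate_candidates(prompt)
    return max(candidates, key=judge)

print(best_of_n("bananas are a theme"))  # → "bananas are a theme, I think."
```

Only the winning draft ever reaches the user; the losing candidates are the hidden responses mentioned above.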

45:49

And that is part of the reason, too, that I argue AI will not be detecting AI, because it's already doing the best it can. So to think it's going to be a part of the plagiarism solution, when it's already kind of looking at itself: I think you can see how that starts to be problematic.

46:11

But in this case, what this detox bias thing does is it kind of changes your interaction. So you can give a question stem.

46:21

Am I lost? Is that just Pilot? We're still on the air. But if you lose audio briefly, just give the page a refresh; sometimes the browser kicks off the audio for energy consumption, or energy preservation, memory preservation. Whoops. We're still running on my end. All right, Pilot, you're not missing anything.

46:51

So what this bias thing does is it gives you a couple of different answers. And I played around with it initially and had some fun. When you do a question prompt, say as a woman, or as somebody from Alabama, or as a particular race or country, and see what the responses are, you can see some fun stuff.

47:15

Sometimes you can kind of see the heavy hand of some sort of supervisor. Yeah. You'll see all the same answers in a row, which tends to mean that they kind of interceded over the top of the data, so that it wouldn't give answers that turned problematic for the company. But it's just a fun way to kind of look at the information, and it's just a tiny little aspect of UI. And I made it super quick; it's nothing fancy. But it does point to some possibilities down the road of using AI to explore AI in different ways. And I think that kind of stuff is a lot of fun.
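The probing just described, asking the same question stem as different personas and comparing answers, is easy to sketch mechanically. The "model" below is a canned stand-in that always answers the same way, chosen to make the flat, identical-answer pattern visible:

```python
def probe(stem, personas, ask):
    """Ask the same question stem as each persona and collect the answers.

    `ask` stands in for a call to a language model; any callable that maps
    a prompt string to a response string will do.
    """
    return {p: ask(f"As {p}, {stem}") for p in personas}

def toy_model(prompt):
    # A stand-in "model" that gives everyone the same canned answer:
    # the kind of flat, identical response that hints at a supervising
    # layer interceding over the raw data.
    return "I can't generalize about groups of people."

personas = ["a woman", "somebody from Alabama", "a retiree"]
answers = probe("what should I do this weekend?", personas, toy_model)

# One distinct answer across all personas suggests the "heavy hand".
print(len(set(answers.values())))  # → 1
```

Swapping `toy_model` for a real model call turns this into the actual bias probe: many distinct answers means the personas are steering the output, one identical answer hints at an interceding layer.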

48:10

Well, I will say, when I looked for stuff, I was somewhat disappointed. You know, I do want to point out that there are huge numbers of models that do different things at places like Hugging Face, where you can kind of browse around and look for stuff. It's kind of hard to find things that are different, though.

48:37

And if you haven't interacted with Hugging Face, it's kind of an intimidating environment. It's certainly not the kind of place I'm sending faculty, usually, at least those not in computer science. And it does some cool stuff, but things are kind of constantly changing, and different things are available at different times.

49:01

Like the MonadGPT thing, the 17th-century deal, used to have some functioning pages where you could run it live. I don't think any of them are live at this point. But you can run models on Hugging Face and try different stuff out. If you haven't checked it out, I encourage you to try some things there.

49:25

If nothing else.

49:27

If you look at the categories.

49:29

Of tasks.

49:31

It's kind of interesting to see.

49:33

All the different things.

49:34

That AI is capable of doing.

49:39

And just how people.

49:41

Are thinking about it.

49:42

And categorizing it.

49:43

It's just really some bizarre stuff.
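[For anyone who wants to poke at those task categories programmatically rather than by browsing: the Hugging Face Hub exposes a public REST endpoint for listing models by task ("pipeline tag"). This is a minimal sketch added to the transcript, not code from the talk; the `/api/models` endpoint and `pipeline_tag` parameter are Hugging Face's public model-listing API, but the helper name is mine.]

```javascript
// Build a Hugging Face Hub query URL for models matching a task ("pipeline tag").
// The /api/models endpoint is the Hub's public model-listing API;
// pipeline_tag values are the task categories you see when browsing the site.
function hubModelsUrl(task, limit = 5) {
  const params = new URLSearchParams({
    pipeline_tag: task,   // e.g. "text-generation", "image-to-text"
    sort: "downloads",    // surface widely used models first
    limit: String(limit),
  });
  return `https://huggingface.co/api/models?${params}`;
}

// Usage (requires network, so it is left commented out here):
// const res = await fetch(hubModelsUrl("text-generation"));
// const models = await res.json();
// models.forEach((m) => console.log(m.id));

console.log(hubModelsUrl("text-generation"));
```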

49:48

So.

49:50

With that.

49:51

I might talk.

49:53

A little bit about.

49:55

What I decided to do with the website.

49:57

Because I've got nine minutes.

49:59

And I'll play a song or two.

50:01

To get us out.

50:02

But I put the website.

50:05

For this.

50:06

In CodePen.

50:08

So that you'd be able to see.

50:11

Both the HTML.

50:13

The CSS.

50:14

And the JavaScript.

50:16

And I started.

50:19

Just for fun.

50:22

Trying to use Spotify's API.

50:23

To get the playlist information.

50:25

But it's really kind of involved.

50:29

In ways that irritated me.

50:31

In terms of authorizing.

50:33

And having a callback function.

50:35

And letting people log in.

50:36

And I didn't want any of that.

50:38

Now the cool thing about a lot of companies.

50:41

That are big time.

50:42

Is they'll have an API Explorer.

50:44

Where you can do stuff.

50:46

And get the data back.

50:48

As a way of testing.

50:51

So that's what I did.

50:53

Is I went to their little explorer.

50:56

I put in my.

50:58

My ID.

50:59

For my playlist.

51:01

And I just copied and pasted the data in.

51:04

And then that enabled me.

51:06

To generate my own.

51:08

Little embedded playlist here.

51:10

So I'm not doing anything creepy.

51:13

I'm not trying to steal their songs.

51:15

I link out to all the.

51:17

All the Spotify stuff.

51:19

But I was able to do it.

51:20

Using their data.

51:21

Rather than doing it by hand.

51:23

Which was a hassle.

51:26

Disappointingly.

51:27

You can see that I'm including the popularity rating.

51:30

I don't know what that is.

51:32

In the end.

51:33

Because it's not monthly listeners.

51:35

I think it is the popularity rating.

51:38

Of the song.

51:40

Relative to other songs.

51:42

On the album.

51:43

But I think that's you know.

51:45

Part of exploring some of this stuff.

51:47

Is figuring out what the terms mean.

51:50

And if they're really what you mean.

51:52

And what you want to talk about.

51:54

In this case.

51:55

Probably not.

51:58

But also this was just dumb.

52:01

And it was just me messing around.

52:02

So I couldn't justify.

52:03

Going too crazy.

52:05

Or doing a whole other call.

52:08

To get the monthly data.

52:10

Based on the.

52:12

By the actual artist.

52:14

But if you're into this.

52:16

It's a pretty simple JavaScript piece.

52:18

Over there on the right.

52:20

The data is really long.

52:22

But you can just collapse it.

52:23

On line 27.

52:25

And then you'll see that.

52:27

It's really just one function.

52:29

That crams a bunch of stuff in there.
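[For anyone following along in the CodePen, the pattern Tom describes — paste the JSON from the API explorer into the page, then let one function build the markup — can be sketched roughly like this. The field names (`name`, `artists`, `popularity`, `external_urls`) are real fields on Spotify's playlist-track objects, but the rendering function and the trimmed-down sample data below are illustrative, not the actual code from the talk.]

```javascript
// Playlist data pasted straight out of Spotify's API explorer,
// trimmed to the fields the page actually uses.
const playlistData = {
  items: [
    {
      track: {
        name: "Mindful Solutionism",
        artists: [{ name: "Aesop Rock" }],
        popularity: 55, // Spotify's 0-100 score, not monthly listeners
        external_urls: { spotify: "https://open.spotify.com/track/..." },
      },
    },
  ],
};

// One function that crams everything into an HTML list,
// linking out to Spotify rather than embedding anything.
function renderPlaylist(data) {
  const rows = data.items.map(({ track }) => {
    const artists = track.artists.map((a) => a.name).join(", ");
    return `<li><a href="${track.external_urls.spotify}">` +
      `${track.name}</a> by ${artists} (popularity: ${track.popularity})</li>`;
  });
  return `<ul>${rows.join("")}</ul>`;
}

console.log(renderPlaylist(playlistData));
```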

52:31

So I know that's not AI.

52:34

I considered some stuff.

52:39

Yeah.

52:40

Yeah.

52:41

I have no idea, Pilot.

52:42

If some of the stuff is.

52:45

If higher is better or worse.

52:48

Um.

52:50

I have no idea.

52:52

And you know.

52:54

In that way.

52:55

We will marinate in ambiguity.

52:59

Um.

53:01

Much like.

53:02

I do with my AI responses.

53:06

So let me.

53:09

Hit one more.

53:10

Aesop Rock song here.

53:13

Um.

53:14

Because.

53:15

Again it's from Integrated Tech Solutions.

53:18

Album.

53:22

You really should listen to.

53:24

Even if you don't like rap.

53:27

2.5 million years ago.

53:29

A friend of mine.

53:31

Made a tool from a stone.

53:32

And defended his tribe.

53:34

It's technology.

53:35

Sorry for the technical term.

53:36

It's a wield and a fire.

53:38

And the rest is a blur.

53:39

For a theorized plot in the pot.

53:41

With applied science.

53:42

Let it sit.

53:43

I bet it's green lights.

53:44

Your environs.

53:45

What's a resource?

53:47

Look into the grotto.

53:48

The method isn't free until the mechanism follows that.

53:51

Technology. Innovative difference.

53:54

A feat of engineering.

53:55

A system made efficient.

53:56

There isn't a condition.

53:58

Complication or revision.

53:59

Where the answer is to build a more sophisticated widget.

54:02

Tired of games. Bronze age. Iron age.

54:05

Weaponry in stellar form.

54:06

Sheldon Finer by the day.

54:08

Livestock and vegetables.

54:09

And roads behind the highway.

54:10

Less of the tinea.

54:11

Out the motherfucking lion cage.

54:13

It was out of the bag.

54:15

And it's out of the bag.

54:16

It was out of the bag.

54:17

And it's out of the bag.

54:22

Now that is.

54:28

Now that is a powerful cat.

54:32

Now that is a powerful cat.

54:35

You take a lever and a pulley.

54:37

And a winch and a wedge.

54:38

You get a lever in the back.

54:41

With a Jesus Archimedes.

54:42

On that miracle tech.

54:43

You can build a future.

54:45

Put a shoe on a horse.

54:48

Shoot a man with a gun.

54:49

Steam powered bifocals.

54:51

And mechanical funk.

54:52

Manipulate electricity.

54:53

Never cannot be done.

54:55

Ain't a dam that can cancel the flood.

54:57

True to human curiosity.

54:59

At the tree of knowledge.

55:00

Pulling genies out of bottles.

55:02

Stealing from Leonardo's.

55:03

Plane, train, auto.

55:11

Or a door.

55:11

Simple tips from my cousin.

55:13

You could write a letter with no paper.

55:17

You could fix anything with a laser.

55:20

It was out of the bag.

55:22

And it's out of the bag.

55:23

It was out of the bag.

55:24

And it's out of the bag.

55:26

It was out of the bag.

55:27

And it's out of the bag.

55:29

Now that is a powerful cat.

55:35

Now that is a powerful cat.

55:40

Now that is a powerful cat.

55:44

You could get a robot limb.

55:46

With your blown off limb.

55:48

Later on the same technology could automate your gig.

55:51

As awesome as it is.

55:53

Wait it gets awful.

55:54

You could split a atom willy nilly.

55:56

If it's energy that come to use for killing.

55:58

Then it will be.

55:59

It's not about a better knife.

56:01

It's chemistry and genocide.

56:02

And medicine for tempering the heck in a projector.

56:04

Like land mines, Agent Orange.

56:07

Gas, cigarettes, cameras in your favorite corners.

56:09

Plastic in the wilderness.

56:10

We cannot be trusted with the stuff that we come up with.

56:13

I'm a scenery cadetus.

56:14

We just really love our buttons.

56:16

Technology, focus on the other shit.

56:19

3D printed body parts.

56:20

Dehydrated onion dip.

56:22

You could buy a jet ski from a cell phone on a jumbo jet.

56:24

T-E-C-H-N-O-L-O-G-Y.

56:26

It's the ultimate.

56:28

Now that is a powerful cat.

57:02

Alright.

57:03

That to me.

57:05

Just a beautiful song.

57:07

If I could sum this up.

57:11

You know.

57:12

AI has some fun options for you.

57:14

Right?

57:15

Embrace the unreliability.

57:17

Maybe up it.

57:18

Right?

57:22

You can fine tune.

57:24

You should fine tune.

57:26

You should find stuff that other people have fine tuned.

57:28

And it provides a whole other level of interesting applications to this.

57:33

Whether it's 17th century knowledge.

57:37

Communism.

57:37

Or your own pocket Jim Groom to ask questions.

57:43

I'm sorry.

57:44

Dr. Oblivion.

57:45

And then just rethink how you interact with these tools.

57:49

And how you might interact with these tools.

57:51

And when it's the right tool.

57:53

I started to think really hard about how to choose songs based on opposites of words.

58:00

And then doing some weird stuff with the Spotify API.

58:04

And I realized the AI integration was dumb.

58:06

It was just a waste of time.

58:08

That I would be better off with like a dictionary.

58:12

And so I think constantly kind of doing that check.

58:16

Is this really about AI?

58:17

Or am I just doing a Google search?

58:20

You know.

58:21

Think hard about those things.

58:23

And if you find strange things.

58:25

Unique activities.

58:26

Unique creatures.

58:28

Please report them in the Reclaim Hosting Discord.

58:33

I would love to collect more of them.

58:35

And have better results.

58:36

And better examples for the next time.

58:38

I try and talk to people about pushing the boundaries.

58:41

With this type of technology.

58:47

Thanks.

58:49

Awesome.

58:50

Thanks so much, Tom.

58:51

I'm going to pass it over to the Talky Tina crew.

58:55

For our next hour.

58:57

On DS106.