Crazy Until It's Not: Startups, Venture Capital & Big Ideas

Machines will be truly creative | Jonas Andrulis | Aleph Alpha | firstminute capital

firstminute capital


Welcome back to Crazy Until It's Not, a podcast about big ideas and the amazing people behind them. 

Every episode we bring on a new guest who has a prediction for the future, which on the face of it, sounds a little bit crazy. 

I'm your host, Michael Stothard, an early-stage tech investor at firstminute capital.

And today I'm joined by Jonas Andrulis, the Founder and CEO of Aleph Alpha, the generative AI company from Heidelberg, which has built a generative AI model that is going up against the likes of OpenAI.

Listen to this episode if you want to know about:
- How generative AI is changing the world.
- Can Europe compete? 
- The fractal nature of time
- How creativity will be done by machines. 


00;00;00;01 - 00;00;08;00

Jonas Andrulis

For many of the high-value use cases, basically just saying "trust me, bro, Microsoft" is not going to be enough.


00;00;17;02 - 00;00;46;26

Michael Stothard

Hello and welcome to Crazy Until It's Not, a podcast about big ideas and the amazing people behind them. Every episode we bring on a new guest who has a prediction for the future, which on the face of it, sounds a little bit crazy. I'm your host, Michael Stothard, an early-stage tech investor at firstminute capital. And today I'm joined by Jonas Andrulis from Aleph Alpha, the generative AI company from Heidelberg, which has built a generative AI model that is going up against the likes of OpenAI.


00;00;47;07 - 00;00;49;00

Michael Stothard

Thank you so much for being here, Jonas.


00;00;49;13 - 00;00;50;03

Jonas Andrulis

Great to be here.


00;00;50;09 - 00;01;01;05

Michael Stothard

So my first question is, could you just tell us what you're building and how it compares to OpenAI, which is now very famous, everyone's heard of it. What are you doing and how does it compare to them?


00;01;01;19 - 00;01;35;16

Jonas Andrulis

Yeah, I mean, the kind of big word everybody is using is large language model. It goes a little bit beyond that, but at the core, that is the innovation that is carrying a lot of what we're seeing today. And this is what we started to build when I left Apple in 2019, and this was even before GPT-3 was on the market. We started with a generative language model that was able to mimic the speech patterns of Chancellor Angela Merkel. That was kind of one of the first things we built.


00;01;36;02 - 00;02;01;00

Jonas Andrulis

And then we invented multimodality. We have very similar multimodality to what we're seeing with GPT-4, and we have had that since 2021. So it's basically like a world model. I like to think about these systems as generative general intelligence: systems that have learned, and can use, the structure of our world.


00;02;02;29 - 00;02;22;05

Michael Stothard

And how does it compare to OpenAI? I mean, it's fundamentally similar, but if you were going to pick out some differences, and maybe some areas where it's better, where you're kind of winning clients against them, what would you pick out?


00;02;22;29 - 00;02;51;27

Jonas Andrulis

Yeah, I mean, this used to be the multimodality, where we were the only company that had it. And of course GPT-4 now has multimodality as well. One of the things that we're super proud of currently is our level of explainability and trust, and we published a paper a few weeks ago where we invented a method where we can trace all factual outputs to their sources.


00;02;52;08 - 00;03;25;22

Jonas Andrulis

So for everything that our model is generating, we can always give you the source of this factual knowledge. And not only that, we can also give you instances, observations, that conflict with this information. We have mostly focused on enterprises and governments, so you can basically see, for every complex output and question, where in your knowledge base, where in your data, there is conflicting or confirming information about this.


00;03;25;27 - 00;03;32;03

Jonas Andrulis

And this is precisely the kind of context you need as a human to take responsibility for the outcome.


00;03;33;08 - 00;04;01;06

Michael Stothard

That seems like a wildly necessary improvement if you're going to do anything real with these models, because you can't have something that's right 98% of the time, or right-ish from one point of view. You need something that is always right, where you can actually understand the context. Why don't OpenAI and other models do this right now?


00;04;01;29 - 00;04;26;16

Jonas Andrulis

Well, I mean, I'm sure they're working on it. And it took them two years to copy our multimodality, so let's see if it's going to take them two years to copy explainability and trust features. I absolutely agree with you. This is crucial for many of the high-value use cases. Basically just saying "trust me, bro, Microsoft" is not going to be enough.


00;04;26;26 - 00;04;49;08

Jonas Andrulis

Even if your answer were correct, and we've seen many examples where the output is not correct and the model is kind of hallucinating, inventing things. But even if it were correct, we need more than just the correct answer. We need all the context, all the information that humans would require to trust the AI.


00;04;49;17 - 00;04;59;17

Michael Stothard

Does your model make stuff up less? Or is it just that on the rare instances where it does make stuff up, it's easier to detect because it's sourced?


00;05;01;18 - 00;05;23;23

Jonas Andrulis

So the core model does not make stuff up less. If we remove all this functionality and just look at the core model, it behaves very similarly to GPT-3. It makes up stuff about the same as all other models actually do.


00;05;24;15 - 00;05;52;24

Jonas Andrulis

But our explainability feature is actually a modification of the attention mask, so it's not something that lives in the application layer; it's a core change to the model architecture. And what we can do with that applies to every output candidate, because these models work by giving you not necessarily just one possible answer, but a set of answers, a group of answers.


00;05;53;13 - 00;06;32;02

Jonas Andrulis

You kind of see this when you interact with ChatGPT: you ask the same question and you get different answers, right? So there's always a set of possible answers. And what we're doing here is that we can now trace the factual input in those answers to their sources, and we can use that technology, if required, to basically prevent any output that is not backed by information we have on file. Output that's not backed by that information could be hallucinated, could also be common knowledge.


00;06;32;05 - 00;06;37;27

Jonas Andrulis

It could also be correct. But in any case, it is information we cannot provide context for.


00;06;38;01 - 00;06;50;09

Michael Stothard

Can we talk a bit about sovereignty? Does it matter that you guys are European, in a generative AI landscape that is dominated by the US players?


00;06;52;16 - 00;07;21;25

Jonas Andrulis

It matters a little bit for some people. We're working with governments, and of course for them it matters. But for many of the European enterprises we're working with, where we're based is not that important. They care about sovereignty a lot, and they care about the fact that our technology can run on premise, can run in any cloud environment, can run on different kinds of semiconductors.


00;07;22;00 - 00;07;37;29

Jonas Andrulis

So these enterprises worry about sovereignty, they worry about protecting their strategic position and building into the future. But they don't necessarily worry about our zip code or where our headquarters are.


00;07;39;12 - 00;08;08;15

Michael Stothard

In terms of the data, though, that your models are trained on, is it geographically similar to the kind of data that other models are trained on? The thrust of my question is: is there a slightly Californian vibe to the OpenAI models that wouldn't be there with yours, or is that completely the wrong way to think about how data is sucked up for these models?


00;08;10;11 - 00;08;42;14

Jonas Andrulis

So I really like how you put that, and absolutely, there is a very Californian vibe. This is not just about data selection, but let's start with data selection. We trained our model natively in five languages: German, French, Italian, Spanish and English. And just based on that, the cultural knowledge, the kind of context that our model has learned to work with, is naturally a little bit more European.


00;08;42;29 - 00;09;18;11

Jonas Andrulis

So I think that's a foundational difference, which has led to the fact that we have one customer that is using GPT-3 for English text and is using our model for German and French text. And what we're seeing currently in the behavior of ChatGPT is that for every kind of controversial question, you get a very Californian answer, sometimes even a blank refusal to do certain things or to answer any questions in a certain direction.


00;09;18;17 - 00;09;47;21

Jonas Andrulis

And this is not just based on training data selection; this is an intentional effort, where OpenAI and Microsoft have basically built a huge preference database encapsulating their values and their ideological ideas about how the system should behave. And this is where, because we are not B2C focused, we have a different philosophy.


00;09;47;21 - 00;10;06;05

Jonas Andrulis

We're focusing more on transparency and control. And if one of our customers wants to build an AI that's super rude and offensive and insults all their employees all the time, I think it's their responsibility, and we're not technically preventing that from happening.


00;10;06;25 - 00;10;22;18

Michael Stothard

So this is part of a kind of "GPT-3 and OpenAI are too woke" argument, whereas with yours, because enterprises have control over how they build it, they can build it in the way they see fit. Is that right?


00;10;23;06 - 00;11;01;02

Jonas Andrulis

Yeah, in a way. I think building a product that is used by hundreds of millions of people worldwide, with all the risk and all the compliance that comes with that, is a difficult question to get right. So I'm not necessarily saying that OpenAI should have built it differently. I don't know how to balance all these different values: how to balance liberal values and freedom of speech with safety and different sensibilities around offensive speech.


00;11;01;21 - 00;11;21;10

Jonas Andrulis

So that's not necessarily where I think I know better. But we work with enterprises and governments, and those are some of the best enterprises in the world. I don't think that I should force my own ideology on those customers, on those partners.


00;11;21;22 - 00;11;50;03

Michael Stothard

Yeah, that makes sense. And can we talk a bit about you? I love the story about how you got to this place. And ultimately I'm interested in the journey, which I'm surmising here, from geeky research in a sector that was not that exciting to suddenly being a rock star, invited around the world and talking about the only thing anyone cares about.


00;11;50;20 - 00;12;00;06

Michael Stothard

I'd love to talk about that transition and what it feels like. But firstly, what were you always into? What kind of kid were you? Give me a little bit of the story.


00;12;00;13 - 00;12;26;24

Jonas Andrulis

Yeah, I was born in Berlin, in West Berlin, when Berlin was still separated. My parents left Berlin when I was still pretty young, and then I grew up, I had my teenage years, in a super small village, in a 250-year-old house, a very old, scenic kind of place. And this is where I learned to love nature.


00;12;27;01 - 00;12;51;28

Jonas Andrulis

I'm still very much in love with animals and nature. And also during that time, I fell in love with computers. My father is an engineer, so he had some computers at home and also some books about software engineering. So for my first computer, which was a ZX Spectrum, I soldered the power supply, kind of fixed it.


00;12;52;05 - 00;13;23;14

Jonas Andrulis

And then I started coding, in BASIC at the beginning. I had a Commodore 64 and I coded something that was very similar to a Tamagotchi, with an ASCII-art-based creature walking around on the screen. So basically that's how I got into software development. And AI wasn't that prevalent as a term, but I was always building smart systems.


00;13;23;14 - 00;13;45;10

Jonas Andrulis

I was working with Bayesian networks, random forests, other kinds of optimizations, and also fractals and visualizations and stuff like that. So it's been basically always tinkering with computers and coding and thinking about what kind of worlds you can build with software.


00;13;47;01 - 00;13;55;06

Michael Stothard

So what's your life like now that this is the only thing that people care about on the global stage and you're at the center of it?


00;13;55;22 - 00;14;25;18

Jonas Andrulis

Oh, it's nuts. And this is not just for myself; as a whole team, we are being overrun by inbound. We're now at a level of visibility that I would not have expected. And of course we are super happy that the world's biggest enterprises are ringing our bell and getting in line to work with us and share their ideas.


00;14;25;24 - 00;14;49;06

Jonas Andrulis

I think that's exciting. But of course it's also a risk, because all this visibility, the interviews and the shaking hands, will not be what's necessary for survival. If you fast-forward, let's say, two years, we still want to be one of the best AI companies out there.


00;14;49;12 - 00;15;03;10

Jonas Andrulis

And getting there, while at the same time the likes of Google and Microsoft and many more are deploying massive amounts of cash, requires, I think, some smart moves and also some prioritization.


00;15;03;23 - 00;15;12;02

Michael Stothard

Yeah, that makes sense. What's the coolest thing you've got to do that you wouldn't have got to do before? Have you met anyone cool? Been invited anywhere?


00;15;12;02 - 00;15;38;28

Jonas Andrulis

Oh yeah, absolutely. So personally, many parts of the job of a CEO are not necessarily what I enjoy doing the most. When you ask me what I enjoy the most: I enjoy talking to our brilliant techies, I enjoy talking to researchers, designing experiments, coming up with ideas.


00;15;38;28 - 00;15;58;27

Jonas Andrulis

So that's what I really enjoy doing. Traveling around the world and being on stages, I don't mind. It's not horrible, but it's also not necessarily what I would pick if I had free choice. And answering emails is all kinds of horrible. I get so many emails, I should have an AI answering those.


00;16;00;08 - 00;16;37;04

Jonas Andrulis

But one of the things about my current role that I fundamentally enjoy is that it gives me the opportunity to speak to some of the most inspiring and brilliant people all around the world. I was in London just this week, on Tuesday, and I met some absolutely brilliant people. And basically my task now as a CEO is to build an organization that takes all of the super boring stuff off my plate and gives me more time to speak with brilliant people and exchange ideas.


00;16;37;13 - 00;17;00;12

Michael Stothard

Fantastic. And so we've taken a circuitous route to get here, because what you're doing is so interesting and exciting and I wanted to grill you on it, but we have got here: what is your crazy idea for the future? Your idea that sounds really weird and wacky now, but actually, as we can explain to our listeners, is just around the corner.


00;17;00;21 - 00;17;38;26

Jonas Andrulis

So I'm convinced that innovation and creativity will very soon be done by digital machines, the same way it is currently done by biological machines. Because basically, humans are just biological machines, right? And it's pretty obvious to me that there's no fundamental limit here. We imagine that new ideas and creativity and innovation come from some special place, but just out of information theory, this does not make any sense.


00;17;38;26 - 00;18;01;27

Jonas Andrulis

Where is the new information supposed to come from? We're basically just biological machines that are observing the world and being fed inputs from all around us. But the same kind of inputs are available for digital machines as well. So I don't see any reason why there's a fundamental limit here, or anything that AI will not very soon be able to do as well.


00;18;01;29 - 00;18;32;29

Michael Stothard

If you had to stack the creative arts, the creative processes, in some kind of order in terms of which ones get mastered first, from advertising copy, which is pretty much done now, to, you know, the Mona Lisa, to Tolstoy: how long before we get a hit? I mean, we've already got a hit pop song, which was that.


00;18;32;29 - 00;18;46;16

Michael Stothard

The fake Drake collaboration. So maybe that's already been mastered. In what order does human creativity get taken over by the machines, in terms of medium? What are you excited about?


00;18;47;03 - 00;19;13;11

Jonas Andrulis

So personally, I'm a big fan of music. I'm a very bad musician, but I love music, so I'm very excited about what's to come with music. We collaborated with an academic partner of ours and we had an artificial Wagner concert, where we had a whole symphony orchestra. We had the real Wagner and some artificial Wagner.


00;19;13;15 - 00;19;41;11

Jonas Andrulis

And this was extra funny because Wagner was known for being so full of himself; he was absolutely convinced that nobody could match his genius, a very arrogant person, from what is known. And it was super funny that we were able to build an artificial Wagner, and we were even mixing the real one and the artificial one during the concert without most people noticing.


00;19;41;17 - 00;20;05;03

Jonas Andrulis

So that was quite fun. When we look at those creative outputs, text, but also of course images, which is a very big one, and also music, what we need to remember is that the current modeling target for those systems is a little bit different from what an artist is doing.


00;20;05;19 - 00;20;34;10

Jonas Andrulis

And what I mean by that is: look at image models like Stability AI's Stable Diffusion, or DALL-E, or the Midjourney model. They're trained by conceptually understanding the ingredients of images, of text-image pairs, and being able to remix them into new results. And this is in many cases very similar to what humans are doing as well.


00;20;34;10 - 00;21;03;23

Jonas Andrulis

If I look at humans that are creating, and this is not just marketing copy, though of course it's also marketing copy: humans that are creating are in most cases remixing existing ideas. And there's nothing wrong with that at all. What I think makes art special is that art has always been something unique. When you look at the history of art, and I think this applies to all kinds of art, you will always see that.


00;21;04;10 - 00;21;38;03

Jonas Andrulis

Art that was seen as new was always contributing something meaningfully new; it was not just a remix of existing ideas. And this is basically what I think drives the human love for art. Can I explain this briefly? So humans clearly have a world model. We know and understand the structure of the world around us.


00;21;38;20 - 00;22;29;22

Jonas Andrulis

And there's phenomenal research showing that what makes music enjoyable to humans is if it has some surprise. It builds up an expectation, and then there's a little bit of a surprise. This is the same way humor works: humor has a setup and then a punchline, which again works with surprise. So I think there's a human emotional connection to surprise, in all kinds of senses. And this level of surprise, when you apply it to images or music or text, has a strong relationship to art, because art, and our love for art, from my perspective, is a way for humans to update our


00;22;29;22 - 00;22;59;17

Jonas Andrulis

world model. We have this built-in emotional incentive to recognize whenever there is something surprising that's also meaningful, because these are precisely the observations we need to update a world model. So everything that's surprising but also emotionally valuable is worth paying attention to. And this is precisely what created our love for art and our love for creation.


00;22;59;22 - 00;23;07;18

Jonas Andrulis

This is a drive to update and explore our world model more, especially around the edges.


00;23;08;01 - 00;23;43;25

Michael Stothard

That's jaw-dropping and very exciting. Can I also ask you about the more banal world of white-collar work? Which is not Mozart, which is not Tolstoy, which is not creating a surprising but meaningful update to our world model, but which has an element of human creativity to it. What happens to the world, to our society, when AI is better at this stuff than we are?


00;23;44;09 - 00;24;17;15

Jonas Andrulis

Yeah, we're right in the middle of this transformation. And this is, again, not something fundamentally new. The distribution of value creation in our world has been shifting towards IP for the last decades. Even where we are right now, a big part of value is driven by technology and is no longer driven by manual labor.


00;24;17;23 - 00;24;50;00

Jonas Andrulis

And this applies to physical labor, of course, but also to intellectual labor. Many of the jobs we have, even white-collar jobs, and I would count my own job here as well: a lot of the things I do don't require brilliance or innovation or creativity. It's basically just getting stuff done: finding information, orchestrating, creating information, connecting information.


00;24;50;09 - 00;25;29;03

Jonas Andrulis

Many of those tasks will be transformed by AI, transformed in a way where human-machine collaboration will substantially change. And I think there are good things about it, and there are bad things, or risks, about it. The risks certainly include the speed of change. There are studies estimating that this industrial revolution is about two or three times the speed of the industrial revolutions of the past, like the steam engine or electricity.


00;25;29;03 - 00;26;15;04

Jonas Andrulis

And this makes it more difficult for humans to adapt and institutions to adapt. And we want to protect our values, we want to protect our democracy, we want to protect probably the weakest parts of our society; we want to watch out for them. And I think this is difficult if the world is changing so fast, and if a big part of that change is driven by huge companies that are so far ahead of regulation that they can do things the regulatory bodies, the political parties, the social and cultural institutions have not even noticed are going on.


00;26;15;13 - 00;26;30;21

Michael Stothard

Yeah, amazing. I just want to finish up with a few rapid-fire questions at the end, if you don't mind. So my first, and we ask this of all our guests: what is the best bit of advice you ever got?


00;26;31;01 - 00;27;00;29

Jonas Andrulis

So the best and most enjoyable advice that I got was from a brilliant CEO who has a lot more experience than me. We had a dinner and he said: Jonas, it is the job of the CEO to do absolutely nothing. I think that is brilliant advice, because right now, with all the work I'm being drowned under,


00;27;01;12 - 00;27;18;00

Jonas Andrulis

I sometimes risk missing the big picture, risk overlooking huge opportunities and great partnerships. So I think it should be the goal of the CEO to do nothing, though of course it can never fully come to that.


00;27;18;26 - 00;27;28;11

Michael Stothard

What are you great at outside of work? Do you have any hidden talents? Great at the bassoon, ultramarathon runner, that kind of thing?


00;27;28;20 - 00;27;54;02

Jonas Andrulis

Is there such a concept as "outside of work" right now? I mean, one of the things I love is music. I'm not great at it by any means; I'm a bad musician, but I love music, I love creating music, and I also love combining music with AI. Many years ago I had an installation, at a very early stage, where I combined computer vision with music production.


00;27;54;02 - 00;28;10;21

Jonas Andrulis

I produced some music and then, based on people's movement and position in several rooms at that art exhibition, I had the music changing around and all that. So I like to tinker with things like that, but I'm not good at it by any means.


00;28;10;28 - 00;28;14;06

Michael Stothard

Oh, I'll have to check that out. Does it still exist? Can I listen to it?


00;28;14;12 - 00;28;20;22

Jonas Andrulis

No, it's not online. It was not good enough to be preserved for the ages.


00;28;21;13 - 00;28;22;20

Michael Stothard

How did you earn your first dollar?


00;28;23;12 - 00;28;50;05

Jonas Andrulis

So I think the very first dollar was from pretty benign jobs. I was doing odd jobs at a carpenter's when I was still pretty young, just to get my feet wet. But at the age of 16, I started working in software development. So really, the first proper job I got that earned serious money was writing software.


00;28;50;05 - 00;28;54;26

Jonas Andrulis

So basically you can say that I only really know how to do one thing.


00;28;55;14 - 00;29;02;11

Michael Stothard

If you had to distill what you've learned this year into one thing, what would it be? The most important thing you've learned this year?


00;29;02;20 - 00;29;43;27

Jonas Andrulis

So I think what we're all learning this year is that time is changing, that time itself is changing. And this is an interesting concept I've been working with, back when I was implementing fractal processes. There's something called fractal market theory, which approaches risk in capital markets by treating time not as a linear process but as a fractal process, where you have regions or areas where time flows very slowly, and then sometimes time is compressed.


00;29;44;21 - 00;30;09;25

Jonas Andrulis

So I think it now feels, for probably everybody, but especially for people in AI, like time is incredibly compressed, and it feels like everything we're doing can have massive impact on future outcomes, for ourselves and the people we're working with. So this is what I'm a little bit surprised by:


00;30;09;25 - 00;30;16;28

Jonas Andrulis

That time was able to be so compressed as it is currently.


00;30;16;28 - 00;30;29;10

Michael Stothard

It certainly feels like that as a generative AI investor, where, you know, by the time you've done your third call with an early-stage startup, the startup has been completely overtaken by events.


00;30;31;06 - 00;30;37;14

Jonas Andrulis

Are you updating your thesis? Basically, how do you approach that?


00;30;37;27 - 00;30;56;03

Michael Stothard

Yeah, every five minutes there's a new thing that we have to think about. It makes it extremely challenging, but also, I think, the most exciting time to be investing in a long time, maybe since the birth of the iPhone or some such.


00;30;56;13 - 00;31;02;26

Jonas Andrulis

Yeah. I mean, this is the time where empires are created or lost. It clearly feels like that.


00;31;03;01 - 00;31;08;06

Michael Stothard

Yeah, absolutely. Final question: what is going to be the hardest thing by next year?


00;31;08;20 - 00;31;33;01

Jonas Andrulis

I think prioritization. That's a boring answer, right? Because it is the hardest thing, or the most crucial thing, for every successful startup. But with the current opportunities, and this includes everything: we have our own research team, and what should this research team be focused on? We are overrun by inbound, by companies that want to work with us.


00;31;33;08 - 00;31;59;04

Jonas Andrulis

We have great partnership proposals, and we're just 50 people; it's a super small team. Figuring out the best gradient, how we can allocate our resources so that we survive and are still one of the best teams two years down the road: I think this is, at least for me, probably the biggest challenge going forward.


00;31;59;10 - 00;32;14;23

Michael Stothard

Amazing. Okay, well, thank you so much for your time. It's genuinely one of the most exciting conversations, about one of the most exciting companies, that's happening right now. So I really appreciate it. And good luck being in the eye of the storm.


00;32;15;08 - 00;32;16;28

Jonas Andrulis

Thanks a lot. That was great fun.