Talking AI and Future of Work in XR — In a Truck — with Timoni West and Cole Crawford
This week’s episode goes all the way back to last year’s Curiosity Camp, when Alan shared a ride with Unity Lab’s Timoni West and Vapor IO CEO Cole Crawford, recording a podcast along the way. The three discuss the challenges that will arise as AI begins to replace human workers.
Alan: In a very special episode of the XR for Business Podcast, we’re driving in a car with Timoni West, head of XR… Research?
Timoni: Director of XR in Unity Labs.
Alan: Director of XR at Unity Labs, and Cole Crawford, CEO of Vapor IO. So we’re driving on our way up to Curiosity Camp through these beautiful winding roads, and we decided that we would record a podcast, because Cole, in his incredible company building the infrastructure of cloud computing, they built an AR app to help service that. And I thought, what a cool way to use this technology and this time on this beautiful drive. Wow. Look at the size of those trees.
Timoni: They are enormous.
Alan: Oh, my God. Wow. Well, anyway. Timoni, how are you doing?
Timoni: Excellently. And I’m also enjoying the view. Yeah. Yeah, actually, Cole, I’m really interested to hear more about why you chose to go with that, and what the process was like. My team is working on tools for mixed reality. So for Unity itself, that’s used to make, I think, 90 percent of all Hololens applications right now. Century is using Unity for that. But the tools that we’re making today are allowing, I think, for you to more easily make robust, distributed applications that can work across various devices and for various users.
Cole: And that’s very needed. First off, Alan, I just want to say, you sound like you should be a podcast DJ.
Timoni: So it’s cool that you are.
Cole: Well done. But yeah, I mean, the issue for us when we started down this journey was very much a question of, how robust can we make an experience, versus how widely could we make that experience? And the vertically integrated solutions that you had to choose from in the early days of AR/VR, I think, are primed for disruption. I’m super glad to hear that Unity is working on the open APIs, etc., needed to bring this technology to more users, as I’ll quote — maybe a little cliché being where we are and where we’re going — but–
Timoni: Yeah, I want to hear it. What is the problem your company solves?
Cole: Yeah. So we have to think about not four, but 40,000 different data centers; we’re an edge computing/edge data center infrastructure company. And that means you can’t Mechanical Turk what was originally done in data centers. It works with four buildings. It doesn’t work with 40,000. So we had to build autonomy into every aspect of our business, in every aspect of the infrastructure. And that means building really simple interfaces for what would otherwise be really complex problems. And at scale, from a logistics supply chain — remote hands, smart hands, all the things that you do in data centers — what that means is your FedEx guy, your UPS guy, a contracting company that otherwise would need specialized training, now it’s visually assisted capabilities for what would otherwise be a job that you would train for and then go work in a data center. We simplify that.
Alan: So basically what you’re saying is that you’ve given real-time tools to anybody to be an expert on the field, in the field.
Cole: It’s fair to say that the software is the expert, and what you need are opposable thumbs.
Alan: Haha! Which democratizes the whole need for training.
Timoni: You know, it’s funny; I was just getting drinks with someone from OpenAI. He is working on robotic hands with opposable thumbs. I wonder whether or not that’s really necessary. It is a tremendous challenge. Okay, so from what I got from our earlier conversation, someone goes to a data warehouse, they’re looking at a… “RU,” you were saying?
Cole: A rack unit.
Timoni: A rack unit. Yeah, right. Yeah. And they get some information that comes up that says if it’s broken, if the client wants it serviced or just repaired or replaced entirely. So anyone can have the Hololens on and, using an image marker, know what is contextually needed for this particular server rack. An advantage of using augmented reality for this versus just having a bunch of displays is that the monitors don’t break; if a Hololens doesn’t work, you get another one. That’s awesome. Are there any other use cases that you’re using augmented reality for? Or virtual reality? I like seeing the warehouse at scale, etc.
Cole: Absolutely. Yes. And some of the work that you guys are doing I think is incredible. If the Hololens breaks or if a Magic Leap breaks or whatever the hardware happens to be — to go back to that cliché quote, Marc Andreessen said “software is eating the world” — if something fails in hardware, you should be able to take out your phone and have that same experience.
Timoni: Exactly.
Alan: I think that’s the real key to scale.
Cole: It has to be, right? It has to be kind of a “bring your own device.” You have to get to that point.
Alan: Well, even the new glasses. So Nreal launched their glasses this week at AWE, and their glasses plug into your phone via USB-C. It’s using the processing power of your phone to give you really, really good heads-up AR, with positional tracking and everything, just a very lightweight pair of glasses for 500 bucks.
Timoni: And the image quality was really great. I mean, the field of view, obviously it’s constrained to the glasses. But it fits so nicely in the frame, I was super impressed when I tried it out.
Alan: They’re very lightweight, comfortable.
Cole: And this is what’s amazing about Silicon Valley today. I mean, I’m just reminded of where we are, and I just wish you guys could see this–.
Alan: You know what? I’ll take a video of us driving up and we’ll put it in as a gif.
Timoni: It’s just like Lord of the Rings.
Alan: We’re driving with thousand-year-old redwood trees going up a giant mountain in the windiest road you’ve ever seen.
Timoni: Talking about AR.
Alan: In an enormous tank of a truck.
Cole: It’s true. But the pace of innovation, if you think back — Alan, you and I were chatting earlier about your first experience of VR — and I just–
Alan: Actually I got to say this; my first experience in VR was at Curiosity Camp five years ago, and we’re on our way to Curiosity Camp now.
Timoni: Oh, that’s amazing.
Alan: And then Chris Milk put VR on my head and I had this kind of “aha, come to Jesus” moment when I was like, “this is the future of human communications.”
Cole: And isn’t that what got you into tech?
Alan: Yeah.
Timoni: That’s… Wow. That is so cool.
Alan: Well, I was in tech before, but I made DJ stuff. So yeah, a little bit different.
Timoni: It’s all coming together now.
Cole: It totally is. Convergence, you know?
Alan: You were going to talk about the speed of acceleration of technology? And I think people neglect it — because maybe they’re working in IoT, or they’re working in cloud computing, or they’re working in 5G. But if you take a really 10,000-foot view, they all work together. And the fact that they all work together, and they’re all in their infancy now, but all maturing at the exact same time. You have IoT, 5G, quantum computing, cloud edge computing, you’ve got blockchain, VR, AR, all at the same time.
Timoni: Also, I truly believe that — and this has been coming up a little bit more slowly than perhaps what you just described — but the moment we started really having sensors on computers and being able to make sense of that data, apply semantic analysis to it — that is another turning point in computing. That’s the equivalent of going from the mainframe that’s the size of a room to–
Cole: I got chills, that you actually said that. Yeah. It’s great that you did.
Timoni: Yeah. That is… wow, you’ve really got goose bumps… this is the next great thing, having all this world information and then having computers able to understand what’s going on.
Cole: It’s a hundred percent right. Lord Kelvin — just a little history — Lord Kelvin, if you know the Kelvin scale.
Timoni: Yeah.
Cole: He said to measure is to know; if you can’t measure it, you can’t know it.
Timoni: I love that.
Cole: It’s a really cool quote. But think about what we could do and what we have access to. Alan, you mentioned 5G. What we have access to in 5G is a network that is as real as the fiber optics in the ground. With speeds that are the same. So from a latency perspective, the human eye can see 150 points vertically and 180 points horizontally. And at every point there is a level — you can see about 200 points of data — it’s chemical.
Timoni: And there’s different resolutions.
Cole: And different resolutions. But you take some mild compression associated with that to deliver a 4K experience to each eye.
Alan: And then foveated rendering.
Cole: Plus the refresh rate. You’re talking about 10 gigabits of data per second per eye.
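For context, a minimal back-of-the-envelope sketch of how a figure on that order can arise, assuming a 4K panel per eye, 10-bit color, and a 90 Hz refresh rate (illustrative assumptions, not numbers quoted in the episode):

```python
# Back-of-the-envelope estimate of per-eye video bandwidth.
# Panel size, bit depth, and refresh rate are assumptions for illustration.

width, height = 3840, 2160      # assume a 4K UHD panel per eye
bits_per_pixel = 3 * 10         # RGB at 10 bits per channel
refresh_hz = 90                 # a typical VR refresh rate

raw_bps = width * height * bits_per_pixel * refresh_hz
print(f"Uncompressed: {raw_bps / 1e9:.1f} Gbit/s per eye")        # ~22.4 Gbit/s
print(f"With mild 2:1 compression: {raw_bps / 2e9:.1f} Gbit/s")   # ~11.2 Gbit/s
```

Under those assumptions, mild compression lands in the roughly 10 Gbit/s per eye range Cole describes.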
Alan: And then that’s not including what you’re collecting from the sensors from the outside world to make it all synchronized.
Cole: That’s 100 percent accurate. So I mean, think about what you can do with contextualized data about the real world.
Alan: That’s all I think about.
Cole: Yeah, it’s incredible, the capabilities we’re going to have over the next five years as these new networks come out.
Alan: It’s super human.
Cole: It’s going to change. It’s going to change the way humanity interacts with each other.
Timoni: Yeah.
Alan: I can’t wait till we go to a networking thing and everybody’s — it’s facial recognition — and it puts up their names, and who they are, in front of them. Because I don’t remember anybody’s bloody name. That should be the new name tag. You wear these glasses in, everybody’s face pops up.
Timoni: I just said this earlier, but I’ll say it again for the record for the podcast: If you meet me and we’ve met before, and I don’t remember your name or your face, I’m so sorry. As soon as you start talking about what you worked on, I swear to God I’ll remember that part, because that’s always the coolest part; hearing about all the cool shit everyone is doing. I also love that my group has something — it’s a little clique-ish-sounding — we say like, “oh yeah, that person, they kind of get it,” right? What I mean by “getting it” is that we have a shared similar vision of spatial computing outside the bounds of just talking about augmented reality or virtual reality. Those are components, those are displays in this larger ecosystem of networked computers running on the edge that are all consistently talking to each other and have this world information. To me, I don’t want a computer to be a single contained piece of hardware anymore, nor is it, really. Every device I have is networked. But I want to live in a world where computers sort of surround me in the most intelligent and privacy-sensitive way. But really, just sort of customizable to the point where I can wake up in the morning and have computers help me along my day in the way I want them to, as opposed to having a phone, then I have to pick up the haptic glass, or not having my Sonos talk to my shoe lights or whatever. I really want the whole thing to be the computer.
Alan: It’s interesting. I wrote an article recently on BCI and AI coming together as a bi-directional brain computer interface. So, being able to insert a chip into a brain so that you can hijack all the senses. I gave this example of you walking down the street, and you start smelling cookies and coffee, and it gets stronger and stronger because you’re getting close to a Starbucks.
Timoni: Or it hides it so I don’t have to eat the cookies.
Alan: Exactly.
Timoni: Let me give you an example. I’m not talking so much about… like, BCI is so cool, but do you really want everything?
Alan: I don’t know.
Timoni: I’m having trouble with that. But I gave this example in my talk yesterday, and I talk about it all the time; when I wake up in the morning, I want to have little snippets of display UI that are kind of scattered around my home. And it could be a projector, could be glasses, or it could be a bunch of them or whatever, that are all just little subsets of a larger computer session that is happening in the cloud. And I’ve customized and put these things where I want them. Oftentimes there’s no visuals. Maybe I’m just talking to the computer that is in the house. Maybe I’ll have cameras all around the house. Oh, side note; some people will say things like, “are you really going to put a bunch of computers and sensors in your house?” A hundred years ago, nobody had electricity, and we either retrofitted or were willing to take out pipes and put electricity in all of our homes.
Alan: I think everyone is going to have a 5G repeater in their house.
Timoni: We’ll build infrastructure as long as there is enough value to us. And as long as we trust it enough that we’re fine with it. And I think that’s really going to happen. I really just want computers to be distributed little snippets of things, like a great Internet of Things combined with the best types of displays you could possibly have in the moment.
Cole: As you say in your side note, you have to make the capital work. I’m reminded of the autonomous drones, autonomous cars, and the dollars that go into putting everything on board. And the way I see this at city scale is that, from a Silicon perspective, why are we putting $100 computers on 1996 sensors?
Alan: It makes no sense.
Cole: It makes no sense.
Timoni: Yeah.
Alan: It’s because we don’t have 5G yet.
Cole: Exactly correct. Exactly right.
Alan: But here’s the thing. There’s so many great use cases for 5G and headsets. And up until… the problem is nobody’s going to have a headset for another five years. Maybe more. It won’t be a thing that everybody has and it won’t be really good enough — in my opinion — for consumer scale. So for now, it’s enterprise. But what can we do with the phones that still give people superpowers? And I think that’s a really practical thought experiment where you have a device in everybody’s pocket. The 5G ones are gonna be epic. And what can we do with that that we can’t do with what we have now?
Timoni: I really don’t want to be running most of my compute power on local devices. Well, edge, sure, fine; but I don’t want to have an individual application that I must add onto my phone, or augmented reality just simply won’t work. I don’t want it anyway. Like Google Stadia, for example — being able to render enough frames in the cloud.
Alan: That’s pretty cool.
Timoni: Have them go down to the device. That’s more effectively what I want. And the advantage is that if we all have these compute sessions running in the cloud or concurrently running apps, if I want to send you something, your app’s open, and my app is open, it’s just a better way to compute.
Cole: One hundred percent. And I don’t know, you guys might be shocked to learn this, but I’m old guard telco. And the reality is–
Timoni: By the way, he looks very young. This is a very strange statement.
Cole: Very. Yeah, this is like the late 90s. But we’ve built this thing called the Internet backbone, which we all know on the wireline side. But what a lot of people don’t know is that on the wireless side, we built — fundamentally — four different networks. We built the modern networks as we know them today with AT&T, T-Mobile, Sprint, Verizon, etc. They all built their 3G networks and their 4G networks, etc., and we’re all plugging into this big Internet backbone, because that’s what you do. If we’re on our phones watching YouTube, or we’re going to Amazon or Netflix — whatever it is — we’re back on the wireline side. We have to get from the modem in the phone to some radio substation that’s mounted on a macro cell tower site, then take fiber optic cable back to a data center somewhere. And it is so not optimized. I mean, you might be shocked to hear that it is not.
Timoni: I always imagine the building that Netflix had to build in New York. They had, like, Radiolab podcasts about it or something.
Cole: Yeah, yeah. You don’t get better at delivering these experiences by algorithms, because you’re not going to algorithmically speed up the speed of light. This is really where edge comes in. Right? If latency is a function of proximity, all of a sudden, you need to move the compute as close to the device or the user as possible.
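A minimal sketch of that point: latency is bounded below by distance and the speed of light in fiber, which no algorithm can shortcut. The distances below are illustrative assumptions.

```python
# Best-case round-trip time over fiber, ignoring routing, queuing, and
# processing. Light travels roughly 200 km per millisecond in fiber.
FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Physical floor on round-trip latency for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("edge site 10 km away", 10),
                  ("regional data center 500 km away", 500),
                  ("cross-country data center 4,000 km away", 4000)]:
    print(f"{label}: at least {round_trip_ms(km):.2f} ms")
```

Moving the compute from thousands of kilometers away to a nearby edge site is what collapses that floor from tens of milliseconds to well under one.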
Alan: But when it’s not…
Cole: [crosstalk] I’m all for it.
Alan: I don’t think people are ready for this, because once we figure out edge computing at scale with all of these other technologies at the same time? I don’t even know.
Timoni: What are you worried about?
Alan: So here’s what I’m worried about, and it keeps me up at night: AI is going to really quickly replace large swaths of jobs, mainly accounting jobs and data sciences, and huge office towers full of people that are literally doing spreadsheets are gonna be wiped out because they don’t need that anymore. Lawyers — entire swaths of lawyers — are gonna be gone (which I don’t think any of us are really going to complain about). It’ll be the law firms with the best AI that will win. And that’s a whole job category gone. And it’s not that there won’t be other jobs; there will be a transition. But the transition is going to get quicker and quicker and quicker. And I don’t think we have the infrastructure education-wise to retrain and reskill people fast enough, which is why VR and AR are needed for education and training. Your use case is literally paving the way. That’s why we want to do this as a podcast, because you literally have built something that will be a case study for years to come.
Cole: Look, we hope so. But I think to your point — and you could do a whole separate podcast; in fact, we should do a whole separate podcast on the political implications to this. Part II! — do we tax computers? If narrow AI is replacing sort of these first-level jobs in these companies, do we tax them?
Timoni: Okay. So if in the future we have AI that does genuinely replace human workers — and I have some reservations about whether or not that will happen, that have nothing to do with the tech and everything to do with socialization — I think that if we tax the AIs, does that lead to the window tax problem? In the 16th century it was, “let’s tax people by the number of windows they have,” and everyone then bricked up their windows to avoid taxes.
Alan: I think we just have to have a restructure of the tax situation so that corporations pay higher tax than individuals, because what’s going to happen is individuals are going to have fewer and fewer long-term jobs and it will be more the gig economy. If we can fundamentally teach the younger generations to incorporate themselves and use the tax loopholes, then you’ve actually kind of artificially changed the tax structure around; because right now the tax structure is based on taxing the individual at the highest rate, and corporations get all the breaks. Well, if individuals start acting as corporations, then you get the breaks, and the government will take a while to catch on… wow, that’s beautiful. We should stop there and take a photo.
Cole: Do you want to?
Alan: I think we should.
[Intermission]
Timoni: So one thing I point out is that through most of human history, we did not have careers. We did not have salaried jobs; only the landed gentry could be assured of that consistent sort of income. And while there were other social structures in place to make sure that most people reasonably knew where they were going to be able to eat, it was more that the vagaries were natural, not social. This whole return to the gig economy means we are only changing the social structures from the last hundred years. So while people might think of it as this long-term system that is fundamentally changing how humans are, if anything, the last century was the blip and this is a return to the norm.
Alan: I think we can fix making women who just had a baby go right back to work after they’ve given birth… It’s crazy. We should be celebrating that and making sure the parents have as much time with their children as possible, because that’s what makes the whole thing better. Not making people work 80 hours a week.
Cole: A hundred percent. If the computers, if you can tax them, and maybe.
Timoni: Lessen the human burden, is what you’re thinking?
Cole: Well, look, if there’s more time for us to do the things that we care about — not saying that we shouldn’t care about our jobs — but some of the things narrow AI can accomplish alleviate some of the pressures on how we would train, how you would optimize for that function to be done by a human. Does it not make sense, if we’re taxing the computer, that we create some universal basic income?
Alan: One of my friends, Floyd, is a huge proponent of basic income, and it’s something we have to consider. But I think nobody’s going to go for it. First of all, in America, it’s not going to happen. But what we can do is change the tax laws to tax the corporations to basically redistribute and give services to people, because tax was there to serve the people, not greedy corporations… something got off track in the world. It’s not just America, but the world. I think too many people let bankers get away with stuff.
Timoni: I think it’s gonna settle down. The modern economic structure is not even 500 years old, not even. I think we’re at a weird sort of inflection point, and people will start to settle over the next few generations. I always like to think very long term. Over the next 60 to 100 years, I think we’ll start to calm down again. I just… dystopias rarely happen, and when they happen, it’s in blips and they don’t usually last that long. I don’t know. And just one more thing. If it turns out that we actually, in the end, should have a series of large-scale companies running the world, that might not be terrible, as long as the companies are set up correctly.
Alan: I agree with that.
Timoni: A lot of people are going to be like, “no, corporations should not be in charge of anything!”
Alan: Here’s my challenge to that: What is the one measurement of the success of a corporation that the world uses as a standard?
Timoni: Today, you mean?
Alan: Yes, right now.
Timoni: Shareholder value.
Alan: Economics. Yes. How much money? That’s it.
Timoni: And that has always been–
Alan: It’s artificial shareholder value. Like, we drive the share price of companies up based on nothing and drive them down based on nothing.
Cole: Not “nothing.” So obviously there’s a microcosm that exists in Silicon Valley, but for a publicly-traded company, it’s one of two things. Either a dividend payout, or you’re paying off your debt and growth. That’s what it is.
Timoni: But that has been the case for any social structure humans care to name throughout all of human history. You either grow or you’re stagnant and you don’t grow. Corporations just happened to be like a very close, tight-loop version of any society. Could have been a kingdom, could have been a tribe. What have you. Humans: we’re consistent.
Alan: We are that.
Cole: I feel like the hardware is more consistent. The software… I often think of us as 1st-century hardware running on 21st-century software.
Timoni: Yes, totally. Totally.
[Intermission]
Alan: So we just got back in the car, we stopped on the side of a mountain to take some beautiful photographs and we’re back. I don’t know where we left off, but let’s think about this. We were talking about the future of work, how technology is going to fundamentally change, how we train and educate people. But also…
Cole: We were talking about what’s the role of… maybe this is a question I’ll ask you guys. What is the role of the first-level citizen when narrow AI is starting to–
Timoni: Actually come into play in the factory, and becomes a worker?
Cole: Yes. Yeah. Actually becomes that worker.
Alan: The thing is, it’s not going to be overnight that it becomes the worker. What it’s going to do is slowly, one by one, take large swaths of tasks.
Timoni: So… and two things here. First, AI is a [expletive] misnomer. People are like, “cool, an artificial intelligence.” Nope, this is a heavily-curated and single-purpose, like, basically extremely good algorithm that is designed to do like one single thing. And it might be like, find cats with whiskers versus don’t find cats with whiskers. Right?
Alan: But it could also procedurally generate digital humans.
Timoni: That is not going to happen for a very long time.
Alan: Or buildings, which I’ve seen.
Timoni: Oh, you mean like an AI that makes the procedurally-generated buildings?
Alan: Exactly.
Timoni: Sure.
Alan: So the content for something that would take a content artist — a 3D rendering artist — maybe months to build a scene, now they’re procedurally generating this stuff in seconds. So that’s interesting.
Timoni: So you touched on this a little bit earlier. I mentioned this very briefly in our last round of podcasts. But there’s a social component, right? So I have had the good fortune to meet a lot of people who make top-tier, triple-A content for movies and for games. These are the people who will obsessively work, like, thousands and thousands of hours to bring you the final scene in Avengers. And even if they have procedurally generated content, the reality is they always feel like they can just finesse the shit out of it, definitely. And what I see is the creators getting more and more picky. I always go to the post-production talks at SIGGRAPH and they’ll talk about how, like, in Blade Runner, how long it took them to get an artificial human looking good — the Rachel character — and how they had to argue with the director because he wanted the scene in Vegas to have no blues in it whatsoever. And they ended up creating this new type of filter for the camera that had no blues; the director saw it and he was like, “no, I still see some blue,” and had to literally prove that there were no blues in the scene. Now, this is on top of all of the best tech; like, these are the highest-end effects houses in the world. These are the ones who are really pushing the limits and they’re working together. It’s WETA, it’s Rodeo. It’s all of the other greats. And so what do they do when they have these tools and these great machine learning algorithms, they get more and more picky and so on.
In Infinity War, they literally had, I want to say, five petabytes of data, because they scanned every single character and prop in-scene in that whole movie so they could have it in post. And OK, so sure, maybe at some point these will actually be replaced by AIs. But I feel like humans inherently don’t trust the machine enough, or we just want to get our hands dirty. And with lawyers, too. Sure, you have an AI that can make a better decision. But the reality is humans do not make decisions based on data. We use data to justify our predetermined decision processes.
Alan: Yes, but if we did use data, we’d actually be way more effective.
Timoni: Honey, I know.
Alan: So that’s the thing. If a law firm figures this out and says, hey, wait a second, this AI is outperforming our lawyers 10 to 1 — because it will, on everything, because IBM Watson can read 5 million case files in an afternoon, while the lawyer reads maybe five. But what are you trying to do? You’re trying to set a precedent, or you’re looking for precedents. You could scan the entire country’s records, looking for precedents, in seconds. But who created the…
Cole: So I always come back to the whole “do computers dream of electric sheep” and the morality. Look at the divisiveness going on in social media politics right now. It’s been determined that the coders that write this stuff up, they have cognitive bias and they write that into their code. And the code gets trained and it becomes biased.
Timoni: And that’s how you end up with a hella racist [AI].
Cole: [Laughs] that’s exactly correct!
Timoni: But here’s a cool thing. I actually love this about the AIs; when we are concerned about bias in data and that we need to de-bias the data — and honestly, we’ve only begun work on what that even means — what does it mean to be biased? What is a cultural bias versus a universal human value that needs to be removed or cleaned up or gotten rid of? Then also it forces us to think about ourselves. And I really love that if you meet a machine learning algorithm that happens to be biased, that teaches you something about yourself. Right? In the cold light of day, if something was you and did the same thing five billion times until it trained up to, like, the essence of you? I think that’s a great learning tool. No one’s gonna think about it like that, and no one has; they just sort of talk about it in a reactionary way. But there’s some real value to that. I actually love the idea of having this sort of listener that I can talk to that helps me work through my biases, because it can see where any individual action I take or statement I make can go, taken to its logical extreme.
Cole: Yeah, I don’t know if humans are ready yet for the self-reflection that it would take to actually get over it. I think we live in a world of naive realism.
Timoni: You know, I love that. Yeah. I think you’re right. Well, it’s true. For example, I’m a designer by trade. Did UX for years (we called it information architecture before). I studied literature in college. And we always viewed literature through what we called at the time “different critical lenses,” which I now know are just different mental models applied to a different context, or at least that’s how I would describe it. And there are a lot of people online. There are entire communities around rationalism and mental models and really trying to get to the right decision. To your point. Like, we don’t care about being… let’s see, what’s a good way to put it?
Alan: We don’t need to feed people’s egos. We just need the right answer.
Timoni: Exactly. It’s not about being right. It is about seeking the truth and finding the truth. And this goes all the way back to the pragmatism of William James, early 20th century. Well, probably even before that. I don’t know. I do feel like we’re… maybe it’s just because I live in my tiny little rationalist bubble, but I do see more and more people talking about this stuff and interested in this stuff. And I can’t help but think inherently most people would rather be right than… I don’t know. You’re laughing.
Cole: No, I think you’re optimistic. But I think you’re right. I just think it’s going to take some time.
Alan: I think this road has gotten sketchier.
Cole: Yeah, it’s gotten narrower, for sure.
Alan: The road went from two lanes in a winding road to one lane.
Timoni: It’s very s-curvy right now.
Alan: It’s pretty beautiful.
Timoni: And it’s right dappled, like, the sunlight as we go further into the woods.
Cole: Right. We’re going further down the rabbit hole.
Alan: Yeah. This is great.
Timoni: So getting back to machine learning and artificial intelligence, I do think, as I mentioned earlier, I really want to see people starting from where they want to end and start with what their vision is. And then we can work backwards from there to figure out what could potentially go wrong. What I see instead is people sort of being alarmist about what could possibly go wrong with no real end in sight. And that just always ends in this kind of dystopian picture of, well, imagine a world where people know your every thought, emotion, and hope, and therefore can constantly feed this to you.
Alan: I’ve got news for everybody listening. Guess what other people are thinking about you: Nothing. They’re thinking about themselves, really.
Timoni: I mean, it’s true. Marketing is designed to manipulate you. But there’s given a population, too. Right?
Alan: “Make better health decisions. Exercise. Think positive things.”
Timoni: Sure. But if I’m at Disneyland, which is a gigantic mega corporation, and I’m waiting in line for Indiana Jones, the line is designed to make me feel like it’s taking less time than it is. That is a great example of good environment design that does, in fact, manipulate you. And yet, it’s to everyone’s best interest. You can’t make the lines shorter. Right? So why not make it more pleasant along the way?
Alan: It’s a great analogy. OK, so let’s go deep down, since we’re going down the rabbit hole. Timoni, what is your vision for the future?
Timoni: So I would like to see a world in which everyone is able to use computers to the best of their abilities, imagination, and intelligence. Right now, people are using computers all the time. We talk to computers more than we talk to any individual human throughout the day. And yet we have this sort of siloed set of experiences where people can do a task per application. Part of this is just due to the nature of the tools. I have a niece, for example, who is on TikTok probably six hours a day. But if she wanted to describe and illustrate and animate a dream that she had last night, she would have no ability to do this. I think there are several different ways that we can attack the problem. First is to make creation tools that are easier to use, which I think we’re continuing to evolve, and AI can actually help with that with procedurally-generated things. But I also think that we just need computers to be able to listen and react to humans specifically. We do not have operating systems right now that can listen for what I call “the no.” If a computer does something I don’t like, if an application does something I don’t like or don’t want it to do, there is no “no.” We see this slightly with notifications, where it’s like, oh, do you want fewer notifications or to turn them off? We’ve had computers for 60 [expletive] years and that’s as far as we’ve gotten?
Alan: Turn off all your notifications. It’s actually liberating.
Cole: Yes.
Alan: Turn them all off. You’re going to check the apps anyway.
Timoni: I do the same thing.
Cole: How about a better life hack: leave your phone at home a couple of times a week.
Timoni: Oh, interesting.
Alan: Wow.
Timoni: You do that?
Cole: Yeah.
Alan: Do you really?
Cole: Yeah. Yeah. I mean, yeah.
Timoni: Are you carrying an iPad? Are you cheating here?
Cole: No. It’s so good for your mental health. I mean, I’ve not had cell phone service on my phone for 45 minutes now and I’ve not had to look at it. So no, it’s not there. Or if you can’t leave it at home, go take a hike in the woods with no spectrum, go to dinner with friends. Hang out with real people. Put your phone underneath the salt shaker and whoever picks up the phone first–
Alan: Pays the bill.
Cole: Pays the bill! Put it down.
Alan: Yeah.
Timoni: That can lead to some lengthy small talk after dinner.
Alan: “We’re not leaving, but the bill’s getting higher.”
Timoni: That’s cool. But in any case, getting back to a vision for the future. I want to have my computing session in the cloud. I want apps to work interoperably. I want to have a series of continuous… like, in my house, I have pasted a little digital interface here, a little digital interface there, I put up a little inventory. And it’s got all the things that I want. And I can combine them in any given way to do whatever it is I want to do on a computer. Actually, a RISD… I think MFA student recently posted a concept OS that he made called Mercury. I highly encourage you all to take a look at it. It just came out, I think on the 28th. If you just do a Google search for Mercury OS, it’ll come up. And he had rethought the concept of an operating system as a series of stated tasks — like “I want to check my email” — and then everything that you would need in order to effectively check your email, which does not just include your inbox, comes into a set of containers, and this is called a flow state. So you’ve got your inbox, but then maybe you’ve also got your calendar open, and your map open, and your to-do list–.
Alan: Ahh, the tabs that you need for that.
Timoni: Right. And you can drag and drop in between all of these different what we now think of as applications. But if you remove the data layer itself from the container, from the visualization, then you have a really robust way to interact with the computer and digital objects, and in a way that makes sense for you at the time, in the mode that you’re currently in. The cool thing about this for augmented and mixed reality is that it makes no difference if you’re doing this on a 2D screen display or if you’re doing this in a headset that is also showing your 2D screen, or a 3D screen, or a 3D object if that makes more sense, depending on what you’re doing now. This is really essential for augmented reality, that we start to remove the data layer from the container layer, because if I am in augmented reality, if I’m looking through my Hololens and I have two apps open with two cubes that look identical, one in each application, I cannot — and I can’t expect any user to — context switch between what one app does and what the other app does. Like, if one of them is a modelling app and the other one is an interactivity app and I’m dealing with the same cube, it needs to be the same cube in the same place. So what I’m talking about is, like, far afield; 50 or 100 years. We’re going to have to rethink computers, but we should start talking about it now. We’ve been talking about it since the early 2000s. Let’s just continue to push this idea forward.
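A minimal sketch of that separation of the data layer from the container layer, using hypothetical names rather than any real framework’s API: two “apps” hold references to the same underlying object, so it stays the same cube in the same place regardless of which container renders it.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    """Data layer: one cube, one source of truth for its state."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0

@dataclass
class AppView:
    """Container layer: each app renders the shared object its own way."""
    app_name: str
    target: SceneObject

    def render(self) -> str:
        t = self.target
        return f"[{self.app_name}] {t.name} at {t.position}, scale {t.scale}"

cube = SceneObject("cube")
modeling = AppView("ModelingApp", cube)       # hypothetical app names
interaction = AppView("InteractionApp", cube)

cube.scale = 2.0                   # an edit made through the modeling app...
print(modeling.render())
print(interaction.render())        # ...shows up as the same cube in the interactivity app
```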
Cole: Quantum, baby. It was only a matter of time before quantum computers came into this podcast.
Timoni: Ok, so let’s talk quantum.
Alan: Let’s talk quantum.
Timoni: Should we pause and quantum later?
Cole: I think we should quantum later.
Alan: So, part 3 of this podcast is going to be quantum later, around the campfire.
[Intermission]
Cole: …government was any block as part of a blockchain. Now all of a sudden, you are in control. You have an immutable record of truth.
Alan: And you can cancel it.
Cole: Exactly. You can expose…
Timoni: So for context, we’re talking about why it is bad to have your data collected. Why do we keep hearing people say, “oh, but what if the insurance agents know that I have cancer before I do and then [expletive] me over?” The answer to this question is that they shouldn’t be able to [expletive] you over. Right?
Alan: That’s the simple answer.
Timoni: In the world that we want to live in, it should be great that everyone knows you have early-stage cancer so you can fix it. So now Cole’s talking about this concept of the smart citizen who has ultimate control over their data.
Cole: Yeah. And I just think it’s going to take… so A) it goes back to what you were talking about before, which is how do we get corporations to realize that the data they collect is for the benefit of all mankind? And that’s hard to get them over, because right now the dividend, or the capital, or the debt payoff, or the growth is what these guys get rewarded on today. So I think it takes a–
Alan: What if — ready? What if we had a new system and it was an education-based system that, instead of charging people to learn, you got paid to learn? Every time you learn something–
Timoni: What is this, Scandinavia?
Alan: Yeah, well, I come from Canada — we have socialized everything, c’mon. But we’d pay you to learn. And so, a little micro currency; five minutes of reading gets you five points or whatever on your blockchain ledger. Right? But what if that same system also took care of your health care, your insurance, your banking needs, everything that you need? Kind of like WeChat, but instead of one corporation owning it, as you progress in the education component and you graduate, you become an equity shareholder in the company. And the company that has paid you to learn the whole time is now selling you all the services you need, but you own that company that’s selling you the services. So you’ve basically created like a shareholding system, but nobody can own more than anybody else. Everybody’s equal in it, and it automatically waterfall distributes the profits.
Timoni: What the profits are..?
Alan: The profits would come from a number of different ways. So the participants in the program are being educated and trained in mindset, and maybe it’s a percentage of their income in perpetuity or something like that. But they always own this company. I don’t know what that looks like in the long term, but the company itself can make products that are sold outside of the network. We can make products like a health care product; if you make the best health care insurance in the world for your members, other people outside are going to want it as well. And you can make it available to other people, make it a huge profit center. People who didn’t grow up in the system can only access certain services that are profitable to the system and the people in it. You have to go through the system to be part of it. You can’t join after 18; you’re not allowed in. Like, you have to go through the system, graduate from it. Now you’re in, and you’re in it for life.
Timoni: Sounds a little caste-y. I don’t know.
Alan: I’ve been working on different ways to solve this at scale around the globe. So then now you have global citizens all interconnected with the only purpose of helping each other.
Timoni: So one thing that has always puzzled me is why doesn’t socialism work unless it’s kind of sneaked in? Like, socialism is a great idea, actually, I think as a concept — like, on paper. Absolutely, this is sort of a tribal communal way humans kind of inherently want to work and think anyway. And yet at scale, when people claim they’re going to start socialism or a communist country, it always ends up being a cult of personality. Right? It always ends up actually being a fascist society instead. And yet they call themselves that.
Alan: Because it’s usually by some egotistical leader.
Timoni: But why do you think that is?
Alan: Well, why do you think you have the president you have here? The public can be easily swayed by showmanship and flashy shit.
Cole: Is it fair to say that power corrupts and absolute power corrupts? Absolutely.
Alan: I see what you did there.
Timoni: But why can’t we just do socialism from the get-go as a stated goal? Sans cult of personality.
Alan: This is what I’m proposing; a new social exchange where everybody benefits from going into the experience economy. So we’re not going to buy cars. We’re really not going to need to buy houses. We can live anywhere. Imagine, you don’t need to own anything, but you need to have access and experience everything. And so what if part of the program was experiences? And as you educate yourself more and became more valuable to the organization, you got invited to more and more experiences?
Cole: Yeah, I think this has to start. So today we talk about the… I think it’s up to 75 billion connected devices by 2025? Ridiculous.
Alan: 44 zettabytes by 2020.
Cole: Correct. And 120 zettabytes by 2025.
Timoni: I didn’t even know what a zettabyte was.
Cole: A zettabyte is a billion terabytes.
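A quick unit check on the scales being discussed, in decimal (SI) units:

```python
TB = 10**12   # terabyte
ZB = 10**21   # zettabyte

print(ZB // TB)        # 1_000_000_000 -> a zettabyte is a billion terabytes
print(120 * ZB // TB)  # the 120-zettabyte figure expressed in terabytes
```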
Alan: Thank God we have a data scientist.
Cole: Beyond the Internet of Things, I think as you pointed out, the experience. So today we have what’s called the knowledge economy. I think after IoT, after the Internet of Things, we start thinking about the Internet of Skills. And with that, with the Internet of Skills, I think you’re going to finally get to a place where, end-to-end, you could build the proper incentives for contextualizing what’s good for a corporation in context of what’s good for a human.
Alan: And what if the only goal of the corporation was the well-being of the students and of the members and owners of the corporation?
Timoni: Have you read The Diamond Age?
Cole: Yes.
Timoni: The Diamond Age has corporations as sort of citizen-state economies, with class systems.
Alan: I’ve got to read that.
Cole: Yeah, it’s good. That’s funny; I’m reminded of a company… do you guys remember, back in the early, early, early 2000s, around Napster and what was happening? It prompted me to think about a company that — it would be illegal; put your hat on for, like, 20 years ago, and this is controversial — but could you build a company where, if you were a monthly subscriber, that came in the form of some sort of stock in the company? You are a shareholder, and you can provide that platform to your shareholders. Back then, could you do some kind of peer-to-peer network where, as a shareholder, you’re entitled to the content that sits on that network?
Timoni: Like Usenet?
Cole: A little bit. A little bit. So. But it takes… I mean, good luck. Have you ever read Flash Boys?
Timoni: Oh, I’ve heard of it.
Cole: Fantastic Michael Lewis book. And he actually has a podcast. And he did a recent episode, fairly recent, called The Magic Shoebox. Really interesting. And it’s all about high-frequency trading and the latency associated with high-frequency trading. And a guy by the name of Ronan Ryan, who went and created an entirely new stock exchange just to create fairness in the stock exchange. They coiled up miles and miles of fiber and got rid of the HFT guys, so speed took no skill at all. The idea of stock exchanges was that you are informed enough about the mission of a company that you feel like you want to invest in what that company is doing and where it’s going. Then high-frequency trading came about, and a lot of these companies, just because they were one or two milliseconds faster, they just saw big buys happening so they could get in front of that. They could buy it, then sell it to the actual purchaser. Billions of dollars.
Alan: I think the major problems in the world can be solved by putting a leash on bankers, because they tend to make the rules in their own favor. And that’s how you ended up with–
Cole: Money makes the world go around.
Alan: You ended up with a trillion dollar bailout or other.
Cole: That’s right.
Alan: And here’s the thing. Many people don’t realize this right now in America. They don’t realize that the problem in 2008 with the subprime mortgages is being done all over again with retail properties. Some of these big malls are now empty because the big players have pulled out; the malls are dead. But they’re all still being rated as if they were full of triple-A real estate.
Cole: So you know, the guy that ran that hedge fund–
Alan: Yeah.
Cole: –is at Curiosity Camp today. Yeah.
Alan: Wow.
Timoni: All right. Well, we’ll talk to him.
Alan: Let’s have a little conversation about this.
Cole: He was the guy that went to Goldman Sachs and said, “hey, this is what we want to do and we can short all of this stuff.”
Alan: Well, he saw an opportunity.
Cole: If you guys are up for it, whether it’s at Curiosity Camp next year or elsewhere, I’m down to make this an annual special edition podcast.
Alan: I love it.
Timoni: This is awesome.
Cole: You were talking about, you can just create the rules.
Timoni: Yeah. Yeah. Exactly. It’s a curious property of money. And I think also, again, modern economics is just not very old. It’s, like, as old as the Enlightenment and the start of trading and the rise of what is now the modern-day corporation. You do what you want until a law stops you. Usually you can do what you want at least once before someone makes a law to stop you from doing it again.
Cole: Do you guys know this guy, John LeFevre?
Timoni: No.
Cole: John LeFevre has the Twitter handle “@GSElevator.”
Timoni: OK. OH! Yes. Yes. Yes. Yes. Yes. “Goldman Sachs Elevator Overheard.”
Cole: So he actually never worked at Goldman Sachs, but he wrote a book. He did run the syndicate desk for Citi, if I remember correctly. But he wrote a book called Straight to Hell: Deviance, Debauchery and Billion-Dollar Deals. All of that investment banking in the 90s. And it is… I highly recommend it. It’s very short, but a really good read. And you do make up the rules as you go along.
Timoni: It’s partly, like, internal social motivation. Like, obviously you want to win. People who do this are highly competitive by nature, etc. But also you do have a mandate to make money, however you feel you can best do that within the law, within your own ethical practices, or whatever you think. I was listening to The Knowledge Project recently, and I forget who was being interviewed, but they were talking about Enron and how the CEO thought of himself as a fundamentally moral person who was doing the right thing. People are going to laugh, but even his, I guess, pastor vouched for him as just being this fundamentally good person. And I think there comes a point, especially when you are in authority, where you have responsibility to multiple different large-scale entities: the corporation itself, the shareholders, and then the people in that corporation. I can see where you think you’re really working in the best interests of all, against all of what anyone would call conventional morality. And the whole banality of evil, yadda yadda. I get it. But, like, I understand where this is not something humans are good at thinking through on a macro scale.
Cole: Yeah, I couldn’t agree more. You end up having to do some pretty gigantic mental gymnastics to get to what Enron did and say yep, there were ethics and good intent behind those decisions.
Timoni: Right. You can see how they got there.
Cole: Totally. C’est la vie.
Timoni: No, “So say we all.”
Alan: Interchangeable. So we just pulled into this…Oh my God!
Timoni: I feel like I’m rowing a boat in choppy water.
Alan: We are in a boat of a truck. Scout camp is beautiful.
Timoni: Oh, so speaking of, you were talking about your first time doing virtual reality. My first…
141 つのエピソード
Manage episode 353483870 series 2763175
This week’s episode goes all the way back to last year’s Curiosity Camp, when Alan shared a ride with Unity Lab’s Timoni West and Vapor IO CEO Cole Crawford, recording a podcast along the way. The three discuss the challenges that will arise as AI begins to replace human workers.
Alan: In a very special episode of the XR for Business Podcast, we’re driving in a car with Timoni West, head of XR… Research?
Timoni: Director of XR in Unity Labs.
Alan: Director of XR at Unity Labs, and Cole Crawford, CEO of Vapor IO. So we’re driving on our way up to Curiosity Camp through these beautiful winding roads, and we decided that we would record a podcast, because Cole, in his incredible company building the infrastructure of cloud computing, they built an AR app to help service that. And I thought, what a cool way to use this technology and this time on this beautiful drive. Wow. Look at the size of those trees.
Timoni: They are enormous.
Alan: Oh, my God. Wow. Well, anyway. Timoni, how are you doing?
Timoni: Excellently. And I’m also enjoying the view. Yeah. Yeah, actually, Cole, I’m really interested to hear more about why you chose to go with that, and what the process was like. My team is working on tools for mixed reality. So for Unity itself, that’s used to make, I think, 90 percent of all Hololens applications right now. Century is using Unity for that. But the tools that we’re making today are allowing, I think, for you to more easily make robust, distributed applications that can work across various devices and for various users.
Cole: And that’s very needed. First off, Alan, I just want to say, you sound like you should be a podcast DJ.
Timoni: So it’s cool that you are.
Cole: Well done. But yeah, I mean, the issue for us when we started down this journey was very much a question of, how robust can we make an experience, about how widely could we make that experience? And the vertical integrated solutions that you had to choose from in the early days of AR/VR, I think, are primed for disruption. I’m super glad to hear that Unity is working on the open APIs, etc., needed to bring this technology to more users, as I’ll quote — maybe a little cliché being where we are and where we’re going — but–
Timoni: Yeah, I want to hear it. What is the problem you company solves?
Cole: Yeah. So we have to think about not four, but 40,000 different data centers; we’re an edge computing/edge data center infrastructure company. And with that means you can’t Mechanical Turk what was originally done in data centers. It works with four buildings. It doesn’t work with 40,000. So we had to build autonomy into every aspect of our business, in every aspect of the infrastructure. And that means building really simple interfaces for what would otherwise be really complex problems. And at scale, from a logistics supply chain — remote hand, smart hands, all the things that you do in data centers — what that means is your FedEx guy, your U.P.S. guy, a contracting company that otherwise would need specialized training, now it’s visually assisted capabilities for what would otherwise be a job that you would train for and then go work in a data center. We simplify that.
Alan: So basically what you’re saying is that you’ve given real-time tools to anybody to be an expert on the field, in the field.
Cole: It’s fair to say that the software is the exper, and what you need are opposable thumbs,.
Alan: Haha! Which democratizes the whole need for training.
Timoni: You know, it’s funny; I was just getting drinks with someone from Open AI. He is working on the robotic hands with opposable thumbs. I wonder whether or not that’s really necessary. It is a tremendous challenge. Okay, so from what I got from our earlier conversation, someone goes to a data warehouse, they’re looking at a… “RU,” you were saying?
Cole: A rack unit.
Timoni: A rack unit. Yeah, right. Yeah. And they get some information that comes up that says if it’s broken, if the client wants it serviced or just repaired or replaced entirely. So anyone can have the Hololens on and, using an image marker, know what is contextually needed for this particular server RAC. An advantage using augmented reality for this versus just having a bunch of displays is that the monitors don’t break; if a Hololens doesn’t work, you get another one. That’s awesome.Is there any other use cases that you’re using augmented reality for? Or virtual reality? I like seeing the warehouse at scale, etc.
Cole: Absolutely. Yes. And some of the work that you guys are doing I think is incredible. If the Hololens breaks or if a Magic Leap breaks or whatever the hardware happens to be, to go back to that cliché quote, Mark Andreesen said, “software is eating the world. If something fails in hardware, you should be able to take out your phone and have that same experience.”
Timoni: Exactly.
Alan: I think that’s the real key to scale.
Cole: It has to be, right? It has to be kind of a “bring your own device.” You have to get to that point.
Alan: Well, even the new glasses. So Nreal launched their glasses this week in AWE and their glasses plug in USB C into your phone. It’s using the processing power of your phone to give you really, really good heads up AR, and it’s positional tracking and everything, just kind of very lightweight pair glasses for $500 bucks.
Timoni: And the image quality was really great. I mean, the field of view, obviously it’s constrained to the glasses. But it fits so nicely in the frame, I was super impressed when I tried it out.
Alan: They’re very lightweight, comfortable.
Cole: And this is what’s amazing about Silicon Valley today. I mean, I’m just reminded of where we are, and I just wish you guys can see this–.
Alan: You know what? I’ll take a video of us driving up and we’ll put it in as a gif.
Timoni: It’s just like Lord of the Rings.
Alan: We’re driving with thousand-year-old redwood trees going up a giant mountain in the windiest road you’ve ever seen.
Timoni: Talking about AR.
Alan: In an enormous tank of a truck.
Cole: It’s true. But the pace of innovation, if you think back — Alan, you and I were chatting earlier about your first experience of VR — and I just–
Alan: Actually I got to say this; my first experience in VR was at Curiosity Camp five years ago, and we’re on our way to Curiosity Camp now.
Timoni: Oh, that’s amazing.
Alan: And then Chris Milk put VR on my head and I had this kind of “aha, come to Jesus” moment when I was like, “this is the future of human communications.”
Cole: And isn’t that what got you into tech?
Alan: Yeah.
Timoni: That’s… Wow. That is so cool.
Alan: Well, I was in tech before, but I made DJ stuff. So yeah, a little bit different.
Timoni: It’s all coming together now.
Cole: It totally is. Convergence, you know?
Alan: You were to talk about kind of the speed of acceleration of technology? And I think people neglect– because maybe they’re working in IoT, or are working in cloud computing, or they’re working in 5G. But if you take a really 10,000-foot view, they’ll all work together. And the fact that they all work together and they’re all in their infancy now, but all maturing at the exact same time. You have IoT, 5G, quantum computing, cloud edge computing, you’ve got blockchain, VR, AR, all at the same time.
Timoni: Also, I truly believe that — and this has been coming up a little bit more slowly than perhaps what you just described — but the moment we started really having sensors on computers and being able to make sense of that data, apply semantic analysis to it — that is another turning point in computing. That's the equivalent of going from the mainframe that's the size of a room to–
Cole: I got chills, that you actually said that. Yeah. It’s great that you did.
Timoni: Yeah. That is… wow, you’ve really got goose bumps… this is the next great thing, having all this world information and then having computers able to understand what’s going on.
Cole: It’s a hundred percent right. Lord Kelvin — just a little history — Lord Kelvin, if you know the Kelvin scale.
Timoni: Yeah.
Cole: He said to measure is to know; if you can’t measure it, you can’t know it.
Timoni: I love that.
Cole: It's a really cool quote. But think about what we could do and what we have access to. Alan, you mentioned 5G. What we have access to in 5G is a network that is as real as the fiber optics in the ground, with speeds that are the same. So from a latency perspective: the human eye can see roughly 150 degrees vertically and 180 degrees horizontally, and at every point there is a level of detail — you can see about 200 points of data — it's chemical.
Timoni: And there’s different resolutions.
Cole: And different resolutions. But you take some mild compression associated with that to deliver a 4K experience to each eye.
Alan: And then foveated rendering.
Cole: And the refresh rate. You're talking about 10 gigabits of data per second per eye.
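As a rough back-of-the-envelope check on that per-eye figure, here is a minimal sketch; the resolution, color depth, refresh rate, and compression ratio are illustrative assumptions, not numbers from the conversation.

```python
# Back-of-the-envelope check of the "~10 Gbit/s per eye" figure.
# All constants below are illustrative assumptions.
RES_X, RES_Y = 3840, 2160      # assume a "4K" panel per eye
BITS_PER_PIXEL = 24            # assume 8-bit RGB, no HDR
REFRESH_HZ = 90                # assume a typical VR refresh rate
COMPRESSION_RATIO = 2          # assume mild (2:1) compression

raw_bps = RES_X * RES_Y * BITS_PER_PIXEL * REFRESH_HZ
compressed_bps = raw_bps / COMPRESSION_RATIO

print(f"raw: {raw_bps / 1e9:.1f} Gbit/s per eye")                            # ~17.9
print(f"with mild compression: {compressed_bps / 1e9:.1f} Gbit/s per eye")   # ~9.0
```

Under those assumptions the uncompressed stream is roughly 18 Gbit/s per eye, and mild 2:1 compression lands in the neighborhood of the 10 Gbit/s cited here.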
Alan: And then that’s not including what you’re collecting from the sensors from the outside world to make it all synchronized.
Cole: That's 100 percent accurate. So, I mean, think about what you can do with that data contextualised with the real world.
Alan: That’s all I think about.
Cole: Yeah, it’s incredible, the capabilities we’re going to have over the next five years as these new networks come out.
Alan: It’s super human.
Cole: It’s going to change. It’s going to change the way humanity interacts with each other.
Timoni: Yeah.
Alan: I can't wait till we go to a networking thing and everybody's — it's facial recognition — and it puts up their names, and who they are, in front of them. Because I don't remember anybody's bloody name. That should be the new name tag. You wear these glasses in, everybody's face pops up.
Timoni: I just said this earlier, but I'll say it again for the record for the podcast: If you meet me and we've met before, and I don't remember your name or your face, I'm so sorry. As soon as you start talking about what you worked on, I swear to God I'll remember that part, because that's always the coolest part; hearing about all the cool shit everyone is doing. I also love that my group has something — it's a little clique-ish-sounding — we say like, "oh yeah, that person, they kind of get it," right? What I mean by "getting it" is that we have a shared, similar vision of spatial computing that goes beyond just talking about augmented reality or virtual reality. Those are components, those are displays in this larger ecosystem of networked computers. They run on the edge, they're all constantly talking to each other, and they have this world information. To me, I don't want a computer to be a single contained piece of hardware anymore, nor is it, really. Every device I have is networked. But I want to live in a world where computers sort of surround me in the most intelligent and privacy-sensitive way. But really, just sort of customizable to the point where I can wake up in the morning and have computers help me along my day in the way I want them to, as opposed to having a phone, then having to pick up the haptic glove, or not having my Sonos talk to my shoe lights or whatever. I really want the whole thing to be the computer.
Alan: It's interesting. I wrote an article recently on BCI and AI coming together as a bi-directional brain-computer interface. So, being able to insert a chip into a brain so that you can hijack all the senses. I talked about this example of you're walking down the street, and you start smelling cookies and coffee, and it gets stronger and stronger because you're getting close to a Starbucks.
Timoni: Or it hides it so I don’t have to eat the cookies.
Alan: Exactly.
Timoni: Let me give you an example. I’m not talking so much about… like, BCI is so cool, but do you really want everything?
Alan: I don’t know.
Timoni: I'm having trouble with that. But I gave this example in my talk yesterday, and I talk about it all the time; when I wake up in the morning, I want to have little snippets of display UI that are kind of scattered around my home. And it could be a projector, could be glasses, or it could be a bunch of them or whatever, that are all just little subsets of a larger computer session that is happening in the cloud. And I've customized and put these things where I want them. Oftentimes there's no visuals. Maybe I'm just talking to the computer that is in the house. Maybe I'll have cameras all around the house. Oh, side note; some people will say things like, "are you really going to put a bunch of computers and sensors in your house?" A hundred years ago, nobody had electricity, and we either retrofitted or were willing to take out pipes and put electricity in all of our homes.
Alan: I think everyone is going to have a 5G repeater in their house.
Timoni: We’ll build infrastructure as long as there is enough value to us. And as long as we trust it enough that we’re fine with it. And I think that’s really going to happen. I really just want computers to be distributed little snippets of things, like a great Internet of Things combined with the best types of displays you could possibly have in the moment.
Cole: As you say in your side note, you have to make the capital work. I’m reminded of the autonomous drones, autonomous cars, and the dollars that go into putting everything on board. And the way I see this at city scale is that, from a Silicon perspective, why are we putting $100 computers on 1996 sensors?
Alan: It makes no sense.
Cole: It makes no sense.
Timoni: Yeah.
Alan: It’s because we don’t have 5G yet.
Cole: Exactly correct. Exactly right.
Alan: But here’s the thing. There’s so many great use cases for 5G and headsets. And up until… the problem is nobody’s going to have a headset for another five years. Maybe more. It won’t be a thing that everybody has and it won’t be really good enough — in my opinion — for consumer scale. So for now, it’s enterprise. But what can we do with the phones that still give people superpowers? And I think that’s a really practical thought experiment where you have a device in everybody’s pocket. The 5G ones are gonna be epic. And what can we do with that that we can’t do with what we have now?
Timoni: I really don't want to be running most of my compute power on local devices. Well, edge, sure, fine; but I don't want to have an individual application that I must add onto my phone, or augmented reality just simply won't work. But I don't want it anyway. Like Google Stadia, for example: being able to render enough frames in the cloud.
Alan: That’s pretty cool.
Timoni: Have them go down to the device. That’s more effectively what I want. And the advantage is that if we all have these compute sessions running in the cloud or concurrently running apps, if I want to send you something, your app’s open, and my app is open, it’s just a better way to compute.
Cole: One hundred percent. And I don’t know, you guys might be shocked to learn this, but I’m old guard telco. And the reality is–
Timoni: By the way, he looks very young. This is a very strange statement.
Cole: Very. Yeah, this was like the late 90s. But we've built this thing called the Internet backbone, which we all know on the wireline side. But what a lot of people don't know is that on the wireless side, we built — fundamentally — four different networks. We built the modern networks as we know them today with AT&T, T-Mobile, Sprint, Verizon, etc. They all built their 3G networks and their 4G networks, etc., and we're all plugging into this big Internet backbone, because that's what you do. If we're on our phones and are watching YouTube, or we're going to Amazon or Netflix — whatever it is — we're back on the wireline side. We have to get from the modem in the phone to some radio substation that's mounted on a macro cell tower site, then take fiber optic cable back to a data center somewhere. And it is so not optimized. I mean, you might be shocked to hear how not optimized it is.
Timoni: I always imagine the building that Netflix had to build in New York. They had, like, Radiolab podcasts about it or something.
Cole: Yeah, yeah. You don’t get better at delivering these experiences by algorithms, because you’re not going to algorithmically speed up the speed of light. This is really where edge comes in. Right? If latency is a function of proximity, all of a sudden, you need to move the compute as close to the device or the user as possible.
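A minimal sketch of that point, latency as a function of proximity; the fiber propagation speed and the distances below are illustrative assumptions, not figures from the conversation.

```python
# Minimal sketch of why "latency is a function of proximity".
# Assumes light travels at roughly 200,000 km/s in optical fiber (about 2/3 of c);
# the distances are illustrative.
FIBER_SPEED_KM_PER_MS = 200  # ~200 km per millisecond in fiber

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay only: there and back, ignoring routing, queuing, and processing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (5, 100, 2000):  # edge site, metro data center, distant cloud region
    print(f"{km:>5} km away -> {round_trip_ms(km):.2f} ms round trip (propagation alone)")
```

No algorithm removes the propagation term, which is why moving compute from a distant region to a nearby edge site is the only way to shrink it.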
Alan: But when it’s not…
Cole: [crosstalk] I’m all for it.
Alan: I don’t think people are ready for this, because once we figure out edge computing at scale with all of these other technologies at the same time? I don’t even know.
Timoni: What are you worried about?
Alan: So here's what I'm worried about, and it keeps me up at night: AI is going to really quickly replace large swaths of jobs, mainly accounting jobs and data science, and huge office towers full of people that are literally doing spreadsheets are gonna be wiped out, because they don't need that anymore. Lawyers — entire swaths of lawyers — are gonna be gone (which I don't think any of us are really going to complain about). It'll be the law firms with the best AI that will win. And that's a whole job category gone. And it's not that there won't be other jobs; there will be a transition. But the transition is going to get quicker and quicker and quicker. And I don't think we have the infrastructure education-wise to retrain and reskill people fast enough, which is why VR and AR is needed for education and training. Your use case is literally paving the way. That's why we wanted to do this as a podcast, because you literally have built something that will be a case study for years to come.
Cole: Look, we hope so. But I think to your point — and you could do a whole separate podcast; in fact, we should do a whole separate podcast on the political implications to this. Part II! — do we tax computers? If narrow AI is replacing sort of these first-level jobs in these companies, do we tax them?
Timoni: Okay. So if in the future we have AI that does genuinely replace human workers — and I have some reservations about whether or not that will happen, that have nothing to do with the tech and everything to do with socialization — I think that if we tax the AIs, does that lead to the window tax problem? In the 17th century it was "let's tax people by the number of windows they have," and everyone then bricked up their windows to avoid taxes.
Alan: I think we just have to restructure the tax situation so that corporations pay higher tax than individuals, because what's going to happen is individuals are going to have fewer and fewer long-term jobs and will be more in the gig economy. If we can fundamentally teach the younger generations how to use the tax loopholes by incorporating themselves, then you've actually kind of artificially changed the tax structure around; because right now the tax structure is based on taxing the individual at the highest rate, and corporations get all the breaks. Well, if individuals start acting as corporations, then you get the breaks, and the government will take a while to catch on… wow, that's beautiful. We should stop there and take a photo.
Cole: Do you want to?
Alan: I think we should.
[Intermission]
Timoni: So one thing I point out is that through most of human history, we did not have careers. We did not have salaried jobs; only the landed gentry could be assured of this consistent sort of income. And there were other social structures in place to make sure that most people reasonably knew where they were going to be able to eat; it was more that the vagaries were natural, not social. This whole return to the gig economy means we're only changing the social structures of the last hundred years. So while people might think of it as a long-term system that is fundamentally changing how humans are, if anything, the last century was the blip and this is a return to the norm.
Alan: I think we can fix things like making women who just had a baby go back to work right after… It's crazy. We should be celebrating that and making sure the parents have as much time with their children as possible, because that's what makes the whole thing better. Not making people work 80 hours a week.
Cole: A hundred percent. If you can tax the computers, then maybe–
Timoni: Lessen the human burden, is what you’re thinking?
Cole: Well, look, if there's more time for us to do the things that we care about — not saying that we shouldn't care about our jobs — but some of the things narrow AI can accomplish alleviate some of the pressures on how we would train, how you would optimize for that function to be done by a human. Does it not make sense, if we're taxing the computer, that we create some universal basic income?
Alan: One of my friends, Floyd, is a huge proponent of basic income, and it's something we have to consider. But I think nobody's going to go for it. First of all, in America, it's not going to happen. But what we can do is change the tax laws to tax the corporations, to basically redistribute and give services to people and make it so… because tax was there to serve the people, not greedy corporations… something got off track in the world. It's not just America, but the world. I think too many people let bankers get away with stuff.
Timoni: I think it's gonna settle down. The modern economic structure is not even 500 years old, not even. I think we're at a weird sort of inflection point, and people will start to settle over the next few generations. I always like to think very long term. Over the next 60 to 100 years, I think we'll start to calm down again. I just… dystopias rarely happen, and when they happen, they happen in blips and they don't usually last that long. I don't know. And just one more thing. If it turns out that we actually, in the end, should have a series of large-scale companies running the world, that might not be terrible, as long as the companies are set up correctly.
Alan: I agree with that.
Timoni: A lot of people are going to be like, “no, corporations should not be in charge of anything!”
Alan: Here’s my challenge to that: What is the one measurement of the success of a corporation that the world uses as a standard?
Timoni: Today, you mean?
Alan: Yes, right now.
Timoni: Shareholder value.
Alan: Economics. Yes. How much money? That’s it.
Timoni: And that has always been–
Alan: It’s artificial shareholder value. Like, we drive the share price of companies up based on nothing and drive them down based on nothing.
Cole: Not "nothing." So obviously there's a microcosm that exists in Silicon Valley, but for a publicly-traded company, it's one of two things: either a dividend payout, or you're paying off your debt and growth. That's what it is.
Timoni: But that has been the case for any social structure humans care to name throughout all of human history. You either grow, or you're stagnant and you don't grow. Corporations just happen to be like a very close, tight-loop version of any society. Could have been a kingdom, could have been a tribe. What have you. Humans: we're consistent.
Alan: We are that.
Cole: I feel like the hardware is more consistent. The software… I often think of us as 1st-century hardware running on 21st-century software.
Timoni: Yes, totally. Totally.
[Intermission]
Alan: So we just got back in the car, we stopped on the side of a mountain to take some beautiful photographs and we’re back. I don’t know where we left off, but let’s think about this. We were talking about the future of work, how technology is going to fundamentally change, how we train and educate people. But also…
Cole: We were talking about what’s the role of… maybe this is a question I’ll ask you guys. What is the role of the first-level citizen when narrow AI is starting to–
Timoni: Actually come into play in the factory, and becomes a worker?
Cole: Yes. Yeah. Actually becomes that worker.
Alan: The thing is, it’s not going to be overnight that it becomes the worker. What it’s going to do is slowly, one by one, take large swaths of tasks.
Timoni: So… and two things here. First, AI is a [expletive] misnomer. People are like, “cool, an artificial intelligence.” Nope, this is a heavily-curated and single-purpose, like, basically extremely good algorithm that is designed to do like one single thing. And it might be like, find cats with whiskers versus don’t find cats with whiskers. Right?
Alan: But it could also procedurally generate digital humans.
Timoni: That is not going to happen for a very long time.
Alan: Or buildings, which I’ve seen.
Timoni: Oh, you mean like an AI that makes the procedurally-generated buildings?
Alan: Exactly.
Timoni: Sure.
Alan: So the content for something that would take a content artist — a 3D rendering artist — maybe months to build a scene, now they’re procedurally generating this stuff in seconds. So that’s interesting.
Timoni: So you touched on this a little bit earlier. I mentioned this very briefly in our last round of podcasts. But there's a social component, right? So I have had the good fortune to meet a lot of people who make top-tier, triple-A content for movies and for games. These are the people who will obsessively work, like, thousands and thousands of hours to bring you the final scene in Avengers. And even if they have procedurally generated content, the reality is they always feel like they can just finesse the shit out of it, definitely. And what I see is the creators getting more and more picky. I always go to the post-production talks at SIGGRAPH, and they'll talk about how, like, in Blade Runner, how long it took them to get an artificial human looking good — the Rachael character — and how they had to argue with the director because he wanted the scene in Vegas to have no blues in it whatsoever. And they ended up creating this new type of filter for the camera that had no blues; the director saw it and he was like, "no, I still see some blue," and they had to literally prove that there were no blues in the scene. Now, this is on top of all of the best tech; like, these are the highest-end effects houses in the world. These are the ones who are really pushing the limits, and they're working together. It's WETA, it's Rodeo, it's all of the other greats. And so what do they do when they have these tools and these great machine learning algorithms? They get more and more picky, and so on.
In Infinity War, they literally had, I want to say, like five petabytes of data, because they scanned every single character and prop in-scene in that whole movie so they could have it in post. And OK, so sure, maybe at some point these will actually be replaced by AIs. But I feel like humans inherently don't trust the machine enough, or we just want to get our hands dirty. And with lawyers, too. Sure, you have an AI that can make a better decision. But the reality is humans do not make decisions based on data. We use data to justify our predetermined decision processes.
Yes, but if we did use data, we'd actually be way more effective. Honestly, I know. So that's the thing. If a law firm figures this out and says, hey, wait a second, this AI is outperforming our lawyers 10 to 1, because it will, on everything; because IBM Watson can read 5 million case files in an afternoon, where the lawyer can read maybe five. But what are you trying to do? You're trying to set a precedent, or you're looking for precedents. You could scan the entire country's records looking for precedents in seconds. But who created–
Cole: So I always come back to the the whole, “do computers dream of electric sheep” and the morality. Look at the divisiveness going on in social media politics right now. It’s been determined that the coders that write this stuff up, they have cognitive bias and they write that into their code. And the code gets trained and it becomes biased.
Timoni: And that’s how you end up with a hella racist [AI].
Cole: [Laughs] that’s exactly correct!
Timoni: But here's a cool thing. I actually love this about the AIs; when we are concerned about bias in data and say we need to de-bias the data — and honestly, we've only begun work on what that even means — what does it mean to be biased? What is a cultural bias versus a universal human value? What needs to be removed or cleaned up or gotten rid of? Then also it forces us to think about ourselves. And I really love that if you meet a machine learning algorithm that happens to be biased, that teaches you something about yourself. Right? In the cold light of day, if something was you and did the same thing five billion times until it trained up to, like, the essence of you? I think that's a great learning tool. No one's gonna think about it like that now; they just sort of talk about it in a reactionary way. But there's some real value to that. I actually love the idea of having this sort of listener that I can talk to that helps me work through my biases, because it can see where any individual action I take or statement I make can go, taken to its logical extreme.
Cole: Yeah, I don't know if humans are ready yet for the self-reflection that it would take to actually get over it. I think we live in a world of naive realism.
Timoni: You know, I love that. Yeah. I think you're right. Well, it's true. For example, I'm a designer by trade. Did UX for years (we called it information architecture before). I studied literature in college. And we always viewed literature through what we called at the time "different critical lenses," which I now know are just different mental models applied to a different context, or at least that's how I would describe it. And there are a lot of people online; there are entire communities around rationalism and mental models and really trying to get to the right decision. To your point. Like, we don't care about being… let's see, what's a good way to put it?
Alan: We don’t need to feed people’s egos. We just need the right answer.
Timoni: Exactly. It's not about being right. It is about seeking the truth and finding the truth. And this goes all the way back to the pragmatism of William James, early 20th century. Well, probably even before that. I don't know. I do feel like we're… maybe it's just because I live in my tiny little rationalist bubble, but I do see more and more people talking about this stuff and interested in this stuff. And I can't help but think inherently most people would rather be right than… I don't know. You're laughing.
Cole: No, I think you’re optimistic. But I think you’re right. I just think it’s going to take some time.
Alan: I think this road has gotten sketchier.
Cole: Yeah, it’s gotten narrower, for sure.
Alan: The road went from two lanes in a winding road to one lane.
Timoni: It’s very s-curvy right now.
Alan: It’s pretty beautiful.
Timoni: And it's all dappled, the sunlight, as we go further into the woods.
Cole: Right. We’re going further down the rabbit hole.
Alan: Yeah. This is great.
Timoni: So getting back to machine learning and artificial intelligence, I do think, as I mentioned earlier, I really want to see people starting from where they want to end and start with what their vision is. And then we can work backwards from there to figure out what could potentially go wrong. What I see instead is people sort of being alarmist about what could possibly go wrong with no real end in sight. And that just always ends in this kind of dystopian picture of, well, imagine a world where people know your every thought, emotion, and hope, and therefore can constantly feed this to you.
Alan: I’ve got news for everybody listening. Guess what other people are thinking about you: Nothing. They’re thinking about themselves, really.
Timoni: I mean, it’s true. Marketing is designed to manipulate you. But there’s given a population, too. Right?
Alan: “Make better health decisions. Exercise. Think positive things.”
Sure. But if I'm at Disneyland, which is a gigantic mega-corporation, and I'm waiting in line for Indiana Jones, the line is designed to make me feel like it's taking less time than it is. That is a great example of good environment design that does, in fact, manipulate you. And yet, it's to everyone's best [interest]. You can't make the lines shorter, right? So why not make it more pleasant along the way?
Alan: It’s a great analogy. OK, so let’s go deep down, since we’re going down the rabbit hole. Timoni, what is your vision for the future?
Timoni: So I would like to see a world in which everyone is able to use computers to the best of their abilities, imagination, and intelligence. Right now, people are using computers all the time. We talk to computers more than we talk to any individual human throughout the day. And yet we have this sort of siloed set of experiences, where people can do a task per application. Part of this is just due to the nature of the tools. I have a niece, for example, who is on TikTok probably six hours a day. But if she wanted to describe and illustrate and animate a dream that she had last night, she would have no ability to do this. I think there are several different ways that we can attack the problem. First is to make creation tools that are easier to use, which I think we're continuing to evolve, and AI can actually help with that with procedurally-generated things. But I also think that we just need computers to be able to listen and react to humans specifically. We do not have operating systems right now that can listen for what I call "the no." If a computer does something I don't like, if an application does something I don't like or don't want it to do, there is no "no." We see this slightly with notifications, where it's like, oh, do you want fewer notifications or to turn them off? We've had computers for 60 [expletive] years and that's as far as we've gotten?
Alan: Turn off all your notifications. It’s actually liberating.
Cole: Yes.
Alan: Turn them all off. You’re going to check the apps anyway.
Timoni: I do the same thing.
Cole: How about a better life hack: leave your phone at home a couple of times a week.
Timoni: Oh, interesting.
Alan: Wow.
Timoni: You do that?
Cole: Yeah.
Alan: Do you really?
Cole: Yeah. Yeah. I mean, yeah.
Timoni: Are you carrying an iPad? Are you cheating here?
Cole: No, it's so good for your mental health. I mean, I've not had cell phone service on my phone for 45 minutes now, and I've not had to look at it. So no, it's not there. Or if you can't leave it at home: go take a hike in the woods with no spectrum, go to dinner with friends, hang out with real people. Put your phone underneath the salt shaker, and whoever picks up their phone first–
Alan: Pays the bill.
Cole: Pays the bill! Put it down.
Alan: Yeah.
Timoni: That can lead to some lengthy small talk after dinner.
Alan: "We're not leaving, but the bill's getting higher."
Timoni: That's cool. But in any case, getting back to a vision for the future. I want to have my computing session in the cloud. I want apps to work interoperably. I want to have a series of continuous… like, in my house, I have pasted a little digital interface here, a little digital interface there; I put up a little inventory. And it's got all the things that I want, and I can combine them in any given way to do whatever it is I want to do on a computer. Actually, a RISD student, I think an MFA student, recently posted a concept OS that he made, called Mercury. I highly encourage you all to take a look at it. It just came out, I think on the 28th. If you just do a Google search for Mercury OS, it'll come up. And he had rethought the concept of an operating system as a series of stated tasks — like "I want to check my email" — and then everything that you would need in order to effectively check your email, which does not just include your inbox, comes into a set of containers, and this is called a flow state. So you've got your inbox, but then maybe you've also got your calendar open, and your map open, and your to-do list–.
Alan: Ahh, the tabs that you need for that.
Right. And you can drag and drop between all of these different things we now think of as applications. But if you remove the data layer itself from the container, from the visualization, then you have a really robust way to interact with the computer and digital objects, in a way that makes sense for you at the time, in the mode that you're currently in. The cool thing about this for augmented and mixed reality is that it makes no difference if you're doing this on a 2D screen display, or if you're doing this in a headset that is also showing your 2D screen, or a 3D screen, or a 3D object if that makes more sense, depending on what you're doing now. This is really essential for augmented reality, that we start to remove the data layer from the container layer; because if I am in augmented reality, if I'm looking through my Hololens and I have two apps open with two cubes that look identical, one in each application, I cannot — and I can't expect any user to — context switch between what one app does and the other app does. Like, if one of them is a modelling app and the other one is an interactivity app and I'm dealing with the same cube, it needs to be the same cube in the same place. So what I'm talking about is, like, far afield; 50 or 100 years. We're going to have to rethink computers, but we should start talking about it now. We've been talking about it since the early 2000s. Let's just continue to push this idea forward.
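As a toy illustration of that data-layer-versus-container idea, here is a minimal sketch in which two "apps" are just views over the same shared object; the names and structure are hypothetical, not how any existing OS or engine does it.

```python
# Hypothetical sketch: the data layer is the single source of truth,
# and each "app" is only a view/container over the same shared objects.
from dataclasses import dataclass, field

@dataclass
class SharedObject:
    """One object in the shared, world-anchored data layer (e.g. 'the cube')."""
    object_id: str
    position: tuple = (0.0, 0.0, 0.0)
    properties: dict = field(default_factory=dict)

class DataLayer:
    """Owns the objects; views never keep private copies."""
    def __init__(self):
        self._objects: dict[str, SharedObject] = {}

    def get(self, object_id: str) -> SharedObject:
        return self._objects.setdefault(object_id, SharedObject(object_id))

class ModelingView:
    """A 'modelling app': edits properties of shared objects."""
    def __init__(self, data: DataLayer): self.data = data
    def scale(self, object_id: str, factor: float):
        self.data.get(object_id).properties["scale"] = factor

class InteractionView:
    """An 'interactivity app': reads the very same objects."""
    def __init__(self, data: DataLayer): self.data = data
    def describe(self, object_id: str) -> str:
        obj = self.data.get(object_id)
        return f"{obj.object_id} at {obj.position}, props={obj.properties}"

# Both views reference the same cube, so the user never context-switches
# between two lookalike cubes owned by two different applications.
world = DataLayer()
ModelingView(world).scale("cube-1", 2.0)
print(InteractionView(world).describe("cube-1"))
```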
Cole: Quantum, baby. It was only a matter of time before quantum computers came into this podcast.
Timoni: Ok, so let’s talk quantum.
Alan: Let’s talk quantum.
Timoni: Should we pause and quantum later?
Cole: I think we should quantum later.
Alan: So, part 3 of this podcast is going to be quantum later, around the campfire.
[Intermission]
Cole: …government was any block as part of a blockchain. Now all of a sudden, you are in control. You have an immutable record of truth.
Alan: And you can cancel it.
Cole: Exactly. You can expose…
Timoni: So for context, we're talking about why it is bad to have your data collected. Why do we keep hearing people say, "oh, but what if the insurance agents know that I have cancer before I do and then [expletive] me over?" The answer to this question is that they shouldn't be able to [expletive] you over. Right?
Alan: That’s the simple answer.
Timoni: In the world that we want to live in, it should be great that everyone knows you have early stage cancer, so you can fix it. So now Cole's talking about this concept of the smart citizen who has ultimate control over their data.
Cole: Yeah. And I just think it's going to take… so A) it goes back to what you were talking about before, which is: how do we get corporations to realize that the data they collect is for the benefit of all mankind? And that's hard to get them over, because right now the dividend, or the capital power, or the debt payoff, or the growth is what these guys get rewarded on today. So I think it takes a–
Alan: What if — ready? What if we had a new system and it was an education-based system that, instead of charging people to learn, you got paid to learn? Every time you learn something–
Timoni: What is this, Scandinavia?
Alan: Yeah, well, I come from Canada — we have socialized everything, c'mon. But what if we paid you to learn? And so, a little micro-currency; five minutes of reading gets you five points or whatever on your blockchain ledger. Right? But what if that same system also took care of your health care, your insurance, your banking needs, everything that you need? Kind of like WeChat, but instead of one corporation owning it, as you progress in the education component and you graduate, you become an equity shareholder in the company. And the company that has paid you to learn the whole time now is selling you all the services you need, but you own that company that's selling you the services. So you've basically created like a shareholding system, but nobody can own more than anybody else. Everybody's equal in it, and it automatically waterfall-distributes the profits.
Timoni: What the profits are..?
Alan: The profits would come from a number of different ways. So the participants in the program are being educated and trained in mindset, and maybe it's a percentage of their income in perpetuity or something like that. But they always own this company. I don't know what that looks like in the long term, but the company itself can make products that are sold outside of the network. We can make products like a health care product; if you make the best health care insurance in the world for your members, other people outside are going to want it as well. And you can make it available to other people, make it a huge profit center. They didn't grow up in the system; they can only access certain services that are profitable to the system and the people. You have to go through the system to be part of it. You can't join after 18; you're not allowed in. Like, you have to go through the system, graduate from it. Now you're in, and you're in it for life.
Timoni: Sounds a little caste-y. I don’t know.
Alan: I’ve been working on different ways to solve this at scale around the globe. So then now you have global citizens all interconnected with the only purpose of helping each other.
Timoni: So one thing that has always puzzled me is why doesn’t socialism work unless it’s kind of sneaked in? Like, socialism is a great idea, actually, I think as a concept — like, on paper. Absolutely, this is sort of a tribal communal way humans kind of inherently want to work and think anyway. And yet at scale, when people claim they’re going to start socialism or a communist country, it always ends up being a cult of personality. Right? It always ends up actually being a fascist society instead. And yet they call themselves that.
Alan: Because it’s usually by some egotistical leader.
Timoni: But why do you think that is?
Alan: Well, why do you think you have the president you have here? The public can be easily swayed by showmanship and flashy shit.
Cole: Is it fair to say that power corrupts and absolute power corrupts? Absolutely.
Alan: I see what you did there.
Timoni: But why can’t we just do socialism from the get-go as a stated goal? Sans cult of personality.
Alan: This is what I’m proposing; a new social exchange where everybody benefits from going into the experience economy. So we’re not going to buy cars. We’re really not going to need to buy houses. We can live anywhere. Imagine, you don’t need to own anything, but you need to have access and experience everything. And so what if part of the program was experiences? And as you educate yourself more and became more valuable to the organization, you got invited to more and more experiences?
Cole: Yeah, I think this has to start. So today we talk about the… I think it’s up to 75 billion connected devices by 2025? Ridiculous.
Alan: 44 zettabytes by 2020.
Cole: Correct. And 120 zettabytes by 2025.
Timoni: I didn’t even know what a zettabyte was.
Cole: A zettabyte is a thousand exabytes; a billion terabytes.
Alan: Thank God we have a data scientist.
Cole: Beyond the Internet of Things, I think, as you pointed out, is the experience economy. So today we have what's called the knowledge economy. I think after IoT, after the Internet of Things, we start thinking about the Internet of Skills. And with that, with the Internet of Skills, I think you're going to finally get to a place where, end-to-end, you could build the proper incentives for contextualizing what's good for a corporation in the context of what's good for a human.
Alan: And what if the only goal of the corporation was the well-being of the students and of the members and owners of the corporation?
Timoni: Have you read The Diamond Age?
Cole: Yes.
Timoni: The Diamond Age has corporations as sort of citizen-state economies, with class systems.
Alan: I’ve got to read that.
Cole: Yeah, it's good. That's funny; I'm reminded of a company… do you guys remember, back in the early, early, early 2000s, around Napster and what was happening? It had prompted me to think about a company that — it would be illegal, but put your hat on for like 20 years ago, and this is controversial — could you build a company where, if you were a monthly subscriber, that subscription came in the form of some sort of stock in the company? You are a shareholder, and you can provide that platform to your shareholders. Back then, could you do some kind of peer-to-peer network where, as a shareholder, you're entitled to the content that sits on that network?
Timoni: Like Usenet?
Cole: A little bit. A little bit. So. But it takes… I mean, good luck. Have you ever read Flash Boys?
Timoni: Oh, I’ve heard of it.
Cole: Fantastic Michael Lewis book. And he actually has a podcast now. He did a recent episode, fairly recent, called The Magic Shoe Box. Really interesting. And it's all about high-frequency trading and the latency associated with high-frequency trading. And a gentleman by the name of Ronan Ryan, who went and created an entirely new stock exchange just to create fairness in the stock exchange. So they coiled up miles and miles of fiber and got rid of the HFT guys' advantage, so it took no skill at all. The idea of stock exchanges is that you are informed enough about the mission of a company that you feel like you want to invest in what that company is doing and where it's going. Then high-frequency trading came about, and a lot of these companies, just because they were one or two milliseconds faster, they just saw big buys happening so they could get in front of that. They could buy it, then sell it to the actual purchaser. Billions of dollars.
Alan: I think the major problems in the world can be solved by putting a leash on bankers, because they tend to make the rules in their own favor. And that’s how you ended up with–
Cole: Money makes the world go around.
Alan: You ended up with a trillion-dollar bailout or whatever.
Cole: That’s right.
Alan: And here's the thing. Many people don't realize this right now in America. They don't realize that the problem in 2008 with the subprime mortgages is being done all over again with retail properties. Some of these big malls are now empty because the big players have pulled out; the malls are dead. But they're all still being treated as if they were triple-A-rated real estate.
Cole: So you know, the guy that ran that hedge fund–
Alan: Yeah.
Cole: –is at Curiosity Camp, today. Yeah.
Alan: Wow.
Timoni: All right. Well, we’ll talk to him.
Alan: Let’s have a little conversation about this.
Cole: He was the guy that went to Goldman Sachs and said, "hey, this is what we want to do, and we can short all of this stuff."
Alan: Well, he saw an opportunity.
Cole: If you guys are up for it, whether it's at Curiosity Camp next year or elsewhere, I'm down to make this an annual special edition podcast.
Alan: I love it.
Timoni: This is awesome.
Cole: You were talking about, you can just create the rules.
Timoni: Yeah. Yeah. Exactly. It's a curious property of money. And I think also, again, modern economics is just not very old; it's about as old as the Enlightenment and the start of trading and the rise of what is now the modern-day corporation. You do what you want until a law stops you. Usually you can do what you want at least once before someone makes a law to stop you from doing it again.
Cole: Do you guys know this guy, John LeFevre?
Timoni: No.
Cole: John LeFevre has the Twitter handle “@GSElevator.”
Timoni: OK. OH! Yes. Yes. Yes. Yes. Yes. “Goldman Sachs Elevator Overheard.”
Cole: So he actually never worked at Goldman Sachs, but he wrote a book. He did run the syndicate desk for Citi, if I remember correctly. But he wrote a book called Straight to Hell: Deviance, Debauchery and Billion-Dollar Deals. All of that investment banking in the 90s. And it is… I highly recommend it. It's very short, but a really good read. And you do make up the rules as you go along.
Timoni: It's partly, like, internal social motivation. Obviously you want to win; people who do this are highly competitive by nature, etc. But also you do have a mandate to make money, however you feel you can best do that within the law, within your own ethical practices, or whatever you think. I was listening to the Knowledge Project recently, and I forget who was being interviewed, but they were talking about Enron and how the CEO thought of himself as a fundamentally moral person who was doing the right thing. People are going to laugh, but even his, I guess, pastor vouched for him as just being this fundamentally good person. And I think there comes a point, especially when you are in authority and you have responsibility for multiple different large-scale entities (the corporation itself, the shareholders, and then the people in that corporation), where I can see how you might think you're really working in the best interests of all, against all of what anyone would call conventional morality. And the whole banality of evil, yadda yadda. I get it. But, like, I understand this is not something humans are good at thinking through on a macro scale.
Cole: Yeah, I couldn’t agree more. You end up having to do some pretty gigantic mental gymnastics to get to what Enron did and say yep, there were ethics and good intent behind those decisions.
Timoni: Right. You can see how they got there.
Cole: Totally. C'est la vie.
Timoni: No, “So say we all.”
Alan: Interchangeable. So we just pulled into this…Oh my God!
Timoni: I feel like rowing a boat in choppy water.
Alan: We are in a boat of a truck. Scout camp is beautiful.
Timoni: Oh, so speaking of, you were talking about your first time doing virtual reality. My first…