Hi, I’m Bryan Cantrill. I’m the CTO of Oxide Computer Company, and we’re at the beginning of 2022 here. By the time you watch this, it will be February of 2022. It’s January as we record this, and I want to engage in an age-old tradition of predictions, of new year’s predictions. But I want to do that by predicting the present, and you’ll see what I mean there when I get into it.
But as technologists, I think we really enjoy predictions because they are so much a part of our livelihood. We make implicit bets based on our predictions of what will be, based on the technologies that we think will be important, the use cases that we think will be important, the companies that we think will be important. We bet on our predictions, and we live in that future. We are often developing the future, developing things that are not yet available. But to understand our own predictions of the future, it can be helpful to look to the past, and especially our past predictions of the future. What in the past did we think the future would be? I find this stuff mesmerizing because we so frequently get it just wildly wrong, and occasionally we get it really, really correct. And it’s really hard to tell which is which, but it’s fun to go back and look at those past predictions. For me, this is kind of personal because I was with a group of engineers in the early 2000s, and we all worked for a computer company, Sun Microsystems, now defunct. I have to clarify that it was a computer company because my 9-year-old daughter apparently thought it was a brewery. We were all colleagues at Sun, and starting in 2000, a group of us, maybe a dozen, would gather at a dinner after the beginning of the new year, and we would make predictions. We’d make 1, 3, and 6-year predictions that we would then write down. And it’s really interesting to go back now and look at those predictions from well over a decade ago. The last predictions we made were in 2007, and the 6-year predictions from 2007, of course, are long since in the past. It’s interesting to look at what our perspectives were at the time; sometimes we got it right and sometimes we got it really wrong. The thing that’s interesting is we often got it right on trajectory but really wrong on timing. So what do I mean?
Let’s go through some of those predictions. But before we do, I want to take a quick aside on dire predictions, because something that is just part of the human condition is that we seem to love making really dire predictions, predictions of the coming apocalypse. And you can see lots and lots of examples of this, some that affect us in technology: “The Population Bomb”, a famous book by Paul Ehrlich in 1968; “Silicon Snake Oil” by Cliff Stoll in 1995; and “Time Bomb 2000” by Ed Yourdon, about the Y2K bug, which Yourdon believed was going to be the collapse of civilization. These dire predictions seem to be very compelling, and they’re compelling to read. Maybe they’re compelling to make. And in some regards, we look back at these, and these three books are collections of predictions that are pretty wrong at this point. The thing that’s interesting about dire predictions is that they’re not totally wrong; there are often some nuggets of truth in there. But as you think about the dire predictions being made, something to remember is that humans are really, really adaptable. What each of these books got wrong is that they forgot about the adaptability of humans. So remember that humans are adaptable, and technologists tend to be especially adaptable. Don’t necessarily dismiss dire predictions completely, certainly not. Obviously we know that our climate is changing; we know that there are things we can really reason about that represent some real future challenges. But at the same time, don’t just accept arbitrarily dire scenarios without taking them apart a little bit and understanding what’s behind them and how we can adapt as technologists.
So I just want to give a little plug for not getting too bummed out about dire predictions, because one of the things I notice, going back through our predictions, is that the direst predictions were often really, really wrong. But what were some of our predictions? These are verbatim some of our predictions from the early aughts. There was a 6-year prediction in 2000, and this is a good example of a prediction that is definitely right on trajectory and almost right on timing: that most CPUs have four or more cores. This is, of course, absolutely true today. Now in 2022, any laptop is going to have many, many cores, not just four but six or eight or more. But this was not exactly true in 2006, when this prediction was targeted. So right on trajectory, a little bit wrong on timing. This next prediction is from 2003, and, confession, this is one of my predictions. This seems like a brilliant prediction in 2003: that Apple is going to have some must-have gadget. I even called it the iPhone, literally, in 2003. This is 3 years before the iPhone came out. It’s a combination digital camera, MP3 player and cell phone. So you might think that I am some genius who had seen the future in 2003. Well, there’s something you should know about this prediction: I was almost making fun of myself for making it, because I thought this thing would be ridiculous and that nobody would want it. So this prediction was correct, but it was also really, really, deeply wrong. And indeed, when the iPhone did come out 3 years later, many of our predictions were that the iPhone was not going to succeed; I think someone even said it was going to be called the “iFlop.” It just goes to show you that you can be right on a lot of things and wrong on some other key aspects. Another prediction that I really liked from 2003 … because I remember when this prediction was made.
This was not by me, but by one of my colleagues, who predicted that Internet bandwidth was going to grow to the point that TV broadcasters became largely irrelevant. And I remember thinking, “No way. That is never going to happen.” The idea that TV broadcasters would become irrelevant, that we would get our television over the Internet … But, of course, this has long since been true for all of us, really, and it’s amazing the degree to which it became true, and yet it felt like an impossible future in 2003. Another, a bit of a dire one, did come true, albeit one that took many, many years: a 9/11-scale economic shock from a virus. This was, of course, years and years before COVID; a good example, by the way, of a dire prediction that was wrong for many, many years before it was right, obviously, with the coronavirus, which is the reason I’m coming to you remotely now. Some other predictions: this is a great one from 2004, because you’d be right to not know what Friendster was, and, of course, there was never a friends.google.com, but it was a little bit of an early indicator of social networking. Another one that I love, from 2004: the term long distance falls out of the Telco lexicon. This is a great prediction because it tells you that we were still talking about long distance in 2004. This is one of these things where I realize that I am very, very old because, of course, we don’t talk about long distance at all anymore. Here’s another great one, from 2005: spam turning a corner and becoming less of a problem than the year before. The reason this was a really interesting one is that the technologist who made this prediction was on the front lines: he worked on a mail server and dealt with spam every day. There was a period of time when email spam felt hopeless. It felt like this was not going to be a resolvable problem, and yet it really did turn the corner. And in fact, it turned the corner right about 2006.
This ended up being a really, really good prediction, a really interesting prediction. Another one, from 2006: Google embarrassed by revelation of unauthorized U.S. government spying at Gmail. That was very prescient in a way. There have certainly been a bunch of scandals involving the government having unauthorized access to these large services, so it was revealing of a sort of zeitgeist, but the actual prediction was wrong: it didn’t happen in a year. Another interesting one, a 6-year prediction from 2006: volume CPUs are still less than 5 gigahertz. Now this might seem like an obvious prediction, but it was not necessarily clear when Dennard scaling was going to end. Dennard scaling is what let us make the clock faster and faster and faster. The person who made this prediction was not me, but it was a great prediction because they really nailed it. Dennard scaling did end in about 2006, 2007, and clock rates did top out at certainly less than 5 gigahertz. Right now, the CPU that you run is definitely not more than 5 gigahertz, and it’s really not even more than 4, likely not even 3. So we took a wholly different approach, and this was beginning to become clear in 2006. Another one that I like, from 2006, is the prediction of wireless 3D video eyeglasses with earbuds, the latest “it” gadget. You’d be right to go check your Internet history to wonder when Google Glass came out: Google Glass came out about 7 years after this. So it ended up being a pretty good prediction, but only pretty good, because I’m not currently wearing 3D video eyeglasses. I think it’s fair to say that these are not ubiquitous. It did not become the latest “it” gadget, but it was a very good prediction. So why are we doing this? Why are we going back through our predictions? To me, the thing that is so interesting about them is that they tell us so much about what we were thinking at the time.
Predictions, I think, tell us much more about the present than they do about the future, and that’s a bit of a paradox. The other thing that I found pretty surprising in going through all these is that the longer-term predictions were much more likely to be accurate than the shorter-term ones. The 6-year predictions had a much higher hit rate; a lot can happen in 6 years. And conversely, the 1-year predictions were often wrong. One year is just not enough time. The kinds of things that happen in 1 year are often unpredictable. If you predicted SARS-CoV-2 in January of 2020, good for you, but that’s more lucky than good. Something that can change us in a year is very, very hard to predict. The other thing that’s kind of interesting … I say this because we were a bunch of infrastructure technologists sitting around making predictions, and, boy, did we totally miss stuff. Yes, we hit some important things. Yes, we predicted the end of Dennard scaling, or it was predicted by one in our group. Yes, we predicted, albeit mockingly, the iPhone. But we did not predict cloud computing or Software as a Service at all, anywhere over that period of time. And what that reminds us is that these are now ubiquitous, and by 2008, 2009, 2010, these things were starting to emerge. Certainly AWS started in 2006, but it started with something that even Amazon viewed as a bit of a lark. So some of these mega trends can be missed by everybody. And it’s really interesting for me to go back and reflect: why did we miss cloud computing? Why did we miss Software as a Service? They do seem clear in retrospect. I’m sure they were being predicted somewhere; I’m sure other people were seeing that future. But we, as this group of technologists in the early 2000s, certainly did not. And as I reflect back on the missed predictions: what did we miss, and why did we miss it?
The things that we missed were the ramifications of the broadening of things that we already knew about at the time. So when I look back on my own career … and, again, I’m old enough to remember long distance being a concept for phone calls, so I’m pretty old. I’m a fossil at this point. But when I look back, the absolutely profound revolutions are things like the Internet. On the one hand, it is not a deep thought that the Internet was a revolution. On the other hand, we are still understanding the degree to which it changed absolutely everything. With the rise of remote work in the last 2 years, we are seeing a whole new complexion of the ubiquity of the Internet. We’re still understanding what it means to have this kind of ubiquitous connectivity. And of course there’s a dark side to that, too, which we have seen as well. For us in software engineering, another really big one is distributed version control. Distributed version control changed everything about the way that we develop software. Why did we, this group of colleagues in the early 2000s, have no predictions around distributed version control? Because we were already using it. We were using a not-very-well-known piece of software called Teamware, the predecessor to BitKeeper, which itself was the predecessor to Git. This was one of the earliest distributed version control systems, and it was already revolutionary for us. But the thing that we missed is that we were sitting on an island of distributed version control; we were, effectively, in the future with respect to distributed version control. And what would it mean if everybody used distributed version control? Ditto open source: we were users of open source. We had open sourced our own software. We were believers in open source. But I don’t think that we fully understood what it meant for everything to be open source.
And we are, of course, now living in a future where, if you take the Internet plus distributed version control plus open source, that’s GitHub. None of us predicted GitHub, certainly, or its importance: the importance of the ability to share code online, which is the way we develop software now. And even Moore’s Law: we understood Moore’s Law. We knew what it meant. We knew that Moore’s Law and Dennard scaling were different things. We obviously knew Dennard scaling was going to end. We knew that transistor density was going to improve. We knew that this meant multiple cores per die. But I don’t think we fully understood what it meant for that computing to become ubiquitous. Certainly we didn’t really predict the power of the compute that we would have in our pocket with respect to mobile computing. Yes, we made glib predictions about the iPhone, but we didn’t really understand what that meant. And to me, it’s interesting to go back and reflect on this. So, again, we knew about all of these things, but we underestimated their transformational power. And maybe that’s the way we should think about the future, instead of trying to think about the things that will be developed. By the way, we thought there were lots of things that were going to be developed that simply never came to fruition, whether it was quantum computing or carbon nanotube-based memory, which I’ve been in love with for what feels like 2 decades, but it is not there yet. We made lots of predictions like that, such as predicting that main memory would be non-volatile; of course, it is clearly still volatile. We’re still using DRAM, right? We did make predictions about Flash with respect to spindles, but things did not happen nearly as quickly as we thought they would, in part because we were trying to predict things that hadn’t happened yet.
What we should actually be focusing on is: where is the future today? Where does it exist today? And indeed I think this is the reason that many technologists actually find the future accidentally. If you ask someone, “Why did you go to work for this company that ended up being visionary, or on this technology that ended up being really important?” often it’s not because the technologist had some grand vision for what the future would be. Often, the technologist was just following their nose, following what’s fun. “I did this because it was fun,” or “I did this because I enjoyed it. I did this because it was useful.” Speaking personally, part of the reason I went to work for Sun is that I could see the importance of symmetric multiprocessing, and I wanted to be a part of that because it was fun and interesting and clearly useful. I wasn’t predicting the rise of the Internet in the late ’90s and so on; I was simply gravitating to stuff that I knew was useful and then riding that technology as it became much more ubiquitous. Now, this is not foolproof, and it’s not formulaic; hopefully, in my going through past predictions that were wrong, you understand that I’ve got the humility to know that. As technologists, it’s somewhat easy to find technology trends that are appealing to us but that simply never broaden, where, as it turns out, we were just nerding out on something. It never actually becomes really interesting; it never hits that point of ubiquity. And this is how we end up being so frequently wrong on timing. But we often, again, are right on trajectory; it just takes a longer time for things to broaden. So it’s very hard to predict what that time will be. It’s easier to predict that something will be important, because we know that it’s important for us today.
And, of course, this leads us to the very famous William Gibson quote: “The future has arrived. It’s just not evenly distributed yet.” Now, about that quote, I’m sorry, I just can’t resist taking a quick aside. That quote is attributed to Gibson, and it conveys something that Gibson definitely believes, but the actual quote itself is of unknown origin, which is kind of funny. It was never in an article that he wrote; it simply started being attributed to him. And sorry to go meta on you for a second, but I do find this to be very funny: the idea of the future not being evenly distributed was itself not very evenly distributed. Apparently, Gibson thought this for many, many years before other people figured it out. He almost views the idea as pedestrian: he conceived of it and talked about it with friends. So it’s completely reasonable that it was attributed to him at some point in time, but the actual quote itself is somewhat apocryphal. I think that’s kind of an interesting aside on Gibson’s quote: the idea that the future is not evenly distributed is itself not evenly distributed. Very meta. So let’s now predict the present. What I want to take you through in the time we’ve got left are not so much predictions for the future as things that I absolutely know, and I know them because we’re using them every day, but they might not be evenly distributed. And if you yourself believe, “Well, of course, everybody knows that,” well, then okay. Everyone might know that, but that’s what I thought about distributed version control in 2000, or that’s what I thought about open source. That’s what I thought about a lot of these things that actually were not evenly distributed and became much more ubiquitous.
So I want to go through a couple of these because, again, they’re things that I know in my heart, because I’m living them, but that you or others may not yet appreciate. One is that compute has really become ubiquitous. An Arm Cortex-M0+ is a real CPU. It’s a 32-bit CPU, and that thing is small: tens of thousands of gates. A RISC-V core can be easily synthesized on an FPGA. We can generate real 32-bit CPUs, and we can put them in lots and lots and lots of places, running on very small amounts of power. And what this means is that we’re actually replacing legacy 8-bit microcontrollers, like the 8051, with real 32-bit compute. Thank you, 8051, for your fine service. It is now time for you to be retired and replaced with a real CPU. And this is important because if you have a real CPU, you can actually have a real operating system. And if you have a real operating system there, you can actually do real stuff on that compute. You’ve got microcontrollers that are much faster than my first computers were, even as a professional. You’ve got these things running at 400 megahertz; you can buy a microcontroller on an eval board for $20 with a 400-megahertz CPU. That’s amazing. That’s a lot of compute, and we’re seeing it now everywhere, and there’s a lot of opportunity when you’ve got really powerful compute. Maybe that’s powerful compute close to the NIC, and you see that with SmartNICs. Maybe that’s on the spindle. Maybe that’s out in the world, certainly with IoT: the ability to get compute in more and more and more places. Moore’s Law, after all, is not just about increased transistor density but also about economics, making those transistors cheaper and cheaper and cheaper, more and more ubiquitous. And this is amazing, that we have so much compute so broadly available, and I think it’s going to create really profound opportunities for change and improvement.
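To make that concrete: part of what a real 32-bit CPU with a real operating system buys you over an 8-bit superloop is structure, like tasks communicating through typed messages. Here is a minimal, host-runnable sketch of that idea in Rust; the names (`Scheduler`, `Message`) are illustrative, not the API of any particular RTOS, and a real system would add preemption, priorities, and memory protection on top:

```rust
use std::collections::VecDeque;

// Illustrative message type -- a real system might carry sensor readings.
#[derive(Debug, PartialEq)]
enum Message {
    Reading(u32),
    Shutdown,
}

// A toy cooperative "scheduler": tasks post typed messages, and a handler
// task drains them. This is the shape of structure a real OS gives you.
struct Scheduler {
    queue: VecDeque<Message>,
}

impl Scheduler {
    fn new() -> Self {
        Scheduler { queue: VecDeque::new() }
    }

    fn post(&mut self, msg: Message) {
        self.queue.push_back(msg);
    }

    // Drain the queue, letting the handler process each message in order,
    // stopping once a Shutdown message is seen.
    fn run<F: FnMut(&Message)>(&mut self, mut handler: F) {
        while let Some(msg) = self.queue.pop_front() {
            handler(&msg);
            if msg == Message::Shutdown {
                break;
            }
        }
    }
}

fn main() {
    let mut sched = Scheduler::new();
    // A "sensor task" posts readings; a "logger task" consumes them.
    sched.post(Message::Reading(42));
    sched.post(Message::Reading(7));
    sched.post(Message::Shutdown);

    let mut total = 0;
    sched.run(|msg| {
        if let Message::Reading(v) = msg {
            total += v;
        }
    });
    println!("sum of readings: {}", total); // prints "sum of readings: 49"
}
```

The point is not the toy code itself but that this kind of abstraction, trivial on a Cortex-M-class part, was out of reach on the 8051-era microcontrollers it replaces.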
Open FPGAs: FPGAs are field-programmable gate arrays, also known as soft logic. This is the ability to dynamically change the gates on a chip, so you can program it to do arbitrary things. The line between software and hardware really blurs with FPGAs, but FPGAs have historically been entirely proprietary. Historically, you’ve been entirely dependent on proprietary tool chains that are completely closed. But thanks to Claire Wolf and her terrific work reverse engineering the Lattice iCE40 bitstream (and other bitstreams have been reverse engineered since then), we actually began to get truly open FPGAs. What do I mean by that? I mean an FPGA where you can synthesize the bitstream, where you can synthesize what you’re going to program onto that FPGA, with 100-percent open source tools. This is a really, really important revolution, and it goes back to why open source was so important. If you’re of my vintage, you’re old enough to remember closed, proprietary compilers. Software engineering did not stand on its own two feet until we had open source compilers; it was very, very important that we had completely open tool chains. Now you can’t imagine paying for a compiler. It’s just not something you pay for, and that has democratized software: many, many more of us are able to write software because compilers are open. I believe the same thing is going to happen with FPGAs. FPGAs are now becoming open, the tool chains are becoming open, and now many, many, many more people can actually synthesize bitstreams. And this stuff is amazing. It’s not the solution to all problems, for certain, but if you have a problem that’s amenable to special-purpose compute, an FPGA can provide you a quick and easy way there. There are lots of concrete examples of this. You definitely want to check out Yosys and OpenFPGA. There’s the Open Source FPGA Foundation as well, which is pretty interesting.
Another thing that we’re very excited about is open HDLs, hardware description languages. Again, historically they’ve been proprietary, and the languages themselves have been a mess, but in recent years, even just in the last 5 years, we have seen an explosion of open hardware description languages. To name a few: Chisel, SpinalHDL, Mamba. One that we’re really interested in at Oxide is Bluespec. The way one of our engineers describes it: “Bluespec is to SystemVerilog what Rust is to assembly. It is a much higher level way of thinking about the system, using types in the compiler to actually generate a reliable system, a verifiable system.” So, super interesting stuff, and again, these things are all open. Every one of these technologies is open, and you can go check them out. Sometimes they’re sparsely documented, as is certainly the case with Bluespec, but these are really interesting technologies that we believe are going to be transformative. EDA is also becoming open source. The software that lays out a PCB has, again, historically been completely proprietary, and we’ve been watching KiCad really closely. KiCad is an open source project that’s been around for many, many years, but, as open source projects do, it’s been getting better and better and better, and we’ve been really excited for the release of KiCad 6. This is a release of the software that just came out a month or so ago, and we, at Oxide, are using KiCad, certainly for all of our smaller boards, our prototype boards, but we see a future in which we can use KiCad for our bigger boards and get rid of proprietary EDA software. And this proprietary EDA software has all of the problems that proprietary software has. Like many shops, we have lost time because a license server crashed or needed to be restarted.
And if you don’t know what a license server is because you’ve grown up in the all-open world, good for you. It is a terrible artifact of proprietary software; no one should be blocked from their work because a license server is down. This has happened to us plenty. It’s happened to big shops even more. The quality that we’re getting from KiCad now is really professional grade, which allows us to iterate so much faster on hardware: to go from initial conception to a manufacturer in as little as hours, and then that manufacturer can ship a board to you in days. You go from something that existed in your head to a PCB in your hand in a week, in 10 days. It’s remarkable. It’s a whole new era. Now, there are also supply chain shortages. Hopefully those are going to be temporary, but that tempers the enthusiasm a little bit. Still, we are very bullish about KiCad. We’re also very bullish about open source firmware. The open source revolution has been so important all the way through the stack: open source databases, obviously, with ScyllaDB and many, many others; open source system software; open source operating systems. But the firmware itself has been resistant to it. The firmware has been proprietary, and as a result we’ve got all the proprietary software problems again: it takes a long time to develop; when it comes out, it’s buggy; it’s got security problems. We, humanity, know that open source software is the way to deliver economic software, reliable software, secure software. That is the vector, and we need to get it all the way into firmware, and we are seeing it happen. It is happening. The boot ROMs remain proprietary, and that proprietary software is a problem, but we are getting there.
We are, again, seeing it happen, and the future of open source firmware is here but definitely not evenly distributed yet; I’m really looking forward to that one being much more evenly distributed. And then the last one is embedded Rust. We are big believers in Rust at Oxide, and indeed it’s a bit of an organizing principle at Oxide. We don’t use Rust by fiat, but we have found that Rust is the right tool for many, many, many of our jobs. Speaking personally as someone who was a C programmer for 2-plus decades, Rust is emphatically the biggest revolution in system software since C. It is a very, very, very big deal. It is very hard to overstate how important Rust is for system software, and I’m shocked and delighted that a programming language is so revolutionary for us in system software. For so long, all we had really was C, and then this offshoot in terms of C++ that had kind of … well, we’ll leave C++ alone. Suffice it to say I was in a bad relationship with C++. But Rust solves so many of those problems, especially for this embedded use case. We talked about that ubiquitous compute earlier; Rust allows us to get into those really tiny spots, those tiny spaces. At Oxide, we’ve developed a new embedded Rust operating system called Hubris. The debugger, appropriately enough, is called Humility, and I definitely encourage folks to check those out. They are, again, a concrete example of what we’re talking about: a concrete example of hardware-software co-design, a concrete example of this embedded Rust use case. So none of these things we’re talking about is new. Just like revision control and the Internet and open source and Moore’s Law circa the 2000s, none of these things is new now. They’ve been around for a while, sometimes for decades. But they are real, they are tangible, and they are becoming much more real and much more tangible.
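To give a flavor of why Rust is such a good fit for embedded work, here is a small host-runnable sketch of the type-state idiom commonly used in embedded Rust HALs: the mode of a GPIO pin is tracked in the type system, so driving a pin that hasn’t been configured as an output is a compile-time error rather than a runtime bug. The names here (`Pin`, `Input`, `Output`) are illustrative; this is not Hubris’s or any particular HAL’s actual API:

```rust
use std::marker::PhantomData;

// Hypothetical pin modes -- zero-sized types that exist only at compile time.
struct Input;
struct Output;

// A GPIO pin whose mode is carried in its type. A real HAL would wrap a
// memory-mapped register; here we just track a pin number.
struct Pin<Mode> {
    number: u8,
    _mode: PhantomData<Mode>,
}

impl Pin<Input> {
    fn new(number: u8) -> Self {
        Pin { number, _mode: PhantomData }
    }

    // Consuming `self` means the old Input-typed value can't be used again.
    fn into_output(self) -> Pin<Output> {
        Pin { number: self.number, _mode: PhantomData }
    }
}

impl Pin<Output> {
    // Only an output pin has set_high(); a real HAL would write a register.
    fn set_high(&mut self) -> u8 {
        self.number
    }
}

fn main() {
    let pin = Pin::<Input>::new(25);
    // pin.set_high();   // compile error: no such method on Pin<Input>
    let mut pin = pin.into_output();
    println!("drove pin {} high", pin.set_high()); // prints "drove pin 25 high"
}
```

The misuse is rejected by the compiler at zero runtime cost, which is exactly the property you want when the "runtime" is a 32-bit microcontroller with no room for defensive checks.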
Each of these things, we believe, is at an inflection point, where they are about to go much, much broader. We believe that the future is one in which hardware and software are co-designed, and again, we are seeing that very concretely. And the fact that all of these technologies are open ensures that they will survive. So we can quibble with the timing, but these technologies will endure. It may take time for them to broaden, but their trajectory seems clear, and we very much look forward to evenly distributing our present into the future. Thank you very much, and enjoy the conference.