Nova Spivack: World-Renowned, Pioneering Global Technology Visionary, Innovator, Strategist, Entrepreneur, Investor

Nova Spivack

Nova Spivack is a technology futurist, serial Internet entrepreneur, and one of the leading voices on the next generation of search, social media, and the Web. He works as a producer of emerging technology ventures including Live Matrix, Klout, Bottlenose, The Daily Dot, StreamGlider, and a stealth-mode new energy company.

In 1994 Nova co-founded one of the first Web startups, EarthWeb, which led to a record-breaking IPO in 1998 and a second IPO in 2007. Nova worked with SRI International (formerly Stanford Research Institute) to conceive and co-found its global business incubator nVention, and on the DARPA CALO program, the most ambitious artificial intelligence project in US history.

He is a frequent speaker and blogger, and has written guest articles for TechCrunch, GigaOM, and SiliconAngle. Nova has authored more than 30 granted and pending patents. He earned his BA in Philosophy from Oberlin College in 1991, with a focus on the philosophy of mind and artificial intelligence. In 1992, he attended the International Space University, a NASA-funded graduate-level professional school for the space industry.

In 1999, he flew to the edge of space with the Russian Air Force and did zero-gravity flight training with the Russian Space Agency as one of the early pioneers of space tourism. That experience later led to his angel investment in Zero Gravity Corporation, which was acquired by Space Adventures.

He is chairman of The Earth Dashboard, a non-profit initiative to build a shared online dashboard that visualizes the real-time state of the planet, and he serves on the boards of directors and advisory boards of numerous startups.

Nova is the eldest grandson of the late management guru Peter Drucker.

He writes about the emerging edge of the Web on Twitter at @novaspivack and on his site. Additional details and history related to Nova Spivack can be found on his site and his Wikipedia page.

More Details
Nova speaks internationally and advises governments and corporations on product strategy and the future of the Web. He has co-authored books on Internet strategy and collective intelligence, and has authored hundreds of articles about the Web.

During the late 1980s and early 1990s, while in high school and college, Nova worked as a software engineer and product marketer with artificial intelligence and supercomputing ventures including Xerox Kurzweil, Thinking Machines, and Individual Inc. He also participated in computer science research at MIT focused on cellular automata.

Nova has been a student of Tibetan Buddhism for over 20 years and has pursued this interest extensively in monasteries, refugee camps and communities in Nepal, India, Europe and the USA. He focuses his philanthropic activities on helping to fund the preservation of Tibet's unique wisdom culture.

While a student at Oberlin College, Nova did a winter term internship as a production assistant at Paramount Studios, working on Star Trek: The Next Generation.

In the mid-1990s Nova co-authored a series of patents for early Web-TV convergence for a product called HyperTV, owned by ACTV. The patents covered simulcasting URLs and metadata on the television vertical blanking interval (VBI) in order to display relevant Web pages next to live television content on suitably instrumented TVs and PCs. The patents were later sold to Disney.

Nova is also currently running a $10K challenge to create unblockable, anonymous, encrypted mobile internet access, in response to recent brutal crackdowns in Tibet, Myanmar and Iran where local governments were able to block, censor, and spy on Web access by their citizens.

Media & Press

Nova has been featured, cited, and has contributed guest articles in numerous media outlets such as: AdWeek, Atlantic Monthly, BusinessWeek, Business 2.0, The BBC, CBS Evening News, The Chronicle of Philanthropy, CNBC, CNET, CNN, Der Spiegel, the Discovery Channel, Download Squad, the Economist, Entrepreneur, the Financial Times, Gartner, GigaOm, the Guardian, Guidewire, Industry Standard, Infoworld, Information Week, Interactive Age, International Herald Tribune, the L.A. Times, Mashable, the MIT Technology Review, the New Scientist, Newsweek, the New York Times, NPR, the Observer UK, PC Magazine, PC World, ReadWriteWeb, Red Herring, Reuters, the San Francisco Chronicle, the San Jose Mercury News, SiliconAngle, TechCrunch, the Times Online, Venturebeat, Wall Street Journal, Washington Post, WIRED and ZDNet

Nova has authored hundreds of articles and co-authored several books on Internet strategy and technology, and led the EarthWeb Press publishing imprint with Macmillan Computer Publishing.

Invited Talks

Nova gave over 30 talks in 2009 – 2010, to both technical and business audiences. For videos of some of these talks please click here.

He has spoken, moderated, and served as a judge at numerous conferences and industry events including: BlogTalk, Defrag, DEMO, DigitalNow, the Financial Times Digital Media Conference, the Future in Review, GigaOm's Bunker sessions, the Highlands Forum, Internet World, Internet Expo, the International Semantic Web Conference, the Island Forum, the Kleiner Perkins CIO Strategy Exchange, MIT's Emtech, NextWeb, NewTeeVee, SDForum, the Semantic Technology Conference, SIBOS, the Singularity Summit, Search Engine Strategies, Stanford/MIT's VLAB, Supernova, SXSW, TTI Vanguard, and The Web 2.0 Summit.

Nova has also given guest lectures and keynotes for the MBA programs at Harvard University, Stanford University and Berkeley, as well as for several business schools in Europe. In addition, Nova has advised governments and defense and intelligence agencies in the United States, as well as in Asia, on the near-term and long-term future of the Web.

The latest blog post about the interview can be found in the IT Managers Connection (IMC) forum, where you can provide your comments in an interactive dialogue.


Interview Time Index (MM:SS) and Topic


Stephen Ibaraki: Welcome today to our interview series with outstanding professionals. We are conducting an exclusive interview with Nova Spivack: celebrated, world-renowned pioneering global technology visionary, innovator, strategist, entrepreneur and investor. Nova, you have an incredible history of notable distinction and significant contributions in so many fields, as a globally top-ranking authority in technology, innovation, strategy, entrepreneurship and investment. Thank you for sharing your considerable expertise, deep accumulated insights and wisdom with our audience.

Nova Spivack: You're welcome. I'm happy to be here.


Stephen Ibaraki: So Nova, as I mentioned, you are really a historical figure, and you continue to make significant contributions to the world. Can you highlight what you consider to be your top four contributions and their lasting historical significance?

Nova Spivack: Well, I think one of the first examples would be the early days of the World Wide Web in 1993 and 1994. I helped start one of the first web companies in the world, called EarthWeb. So that was the first. I really helped to pioneer the early days of the Internet and the Web, particularly understanding business models and how to do online commerce. Secondly, through that I helped launch and evangelize Java technology, working on a number of projects with Sun Microsystems. One was called Gamelan, which was a leading open site about Java for developers, and [we] started the World Java Developers Alliance, which really helped to bring that new model of software to the world. And then in the late 90s and early 2000s I worked on the early days of the Semantic Web, pioneering the concept of smart data and helping to evangelize that. What I'm doing now is a new model for innovation, which I think of as a production studio, like a film studio, but for creating technology ventures. I call it a "Venture Production Studio". And right now I have seven different new startup ventures working on emerging technology frontiers, including the next generation of the Real-Time Web; I'm also working on wireless power, and many other really interesting projects.


Stephen Ibaraki: It's interesting you mentioned that you worked with Java early on at Sun. Did you also work with Eric Schmidt at that time?

Nova Spivack: Well, I certainly knew Eric, and we did interact at the time he was CTO over there. We had a lot of conversations and meetings, and coordinated very closely with their people.


Stephen Ibaraki: Can you profile (and you've mentioned this already in terms of some of your venture work) your current and future areas of focus and why they resonate with you?

Nova Spivack: I think there have been some themes in my work. One of the big themes has been trying to use massive amounts of data to create smarter applications and services. So [it's] data mining: making sense of trends and patterns in large amounts of data to do personalization, targeting, or personal assistants. I worked on a project with SRI and DARPA (the US Defense Department's advanced research projects agency) on intelligent assistants, called CALO, which mined a lot of data from your email and other work that you did to try to assist you. This theme has run through my work consistently, and it continues today with the work we're doing on a project called Bottlenose, where we're mining Twitter and the Social Web to develop personalization, to help users make more sense of what's going on in the Real-Time stream. We filter information for them. That's the big theme.
Another theme has been entrepreneurial endeavors: really helping to start ventures and figuring out ways to get early-stage ideas to market inexpensively and quickly. And I think the new work that I'm doing now with the Venture Production Studio really takes that to the next level. Previously, I worked with one company at a time, and now I have seven that I'm working on. That's kind of interesting, and we continue developing these ideas further. The future areas that I'm interested in continue this trend, but really focus a lot on intelligent assistants. I really believe that's going to be a very important piece of our lives in the future - that we'll have these automated intelligent assistants in different areas, whether we're driving, or walking on a street, or seeking advice on a medical issue, or being a tourist - any of these scenarios. There are lots of opportunities there. Another area is augmented reality, where I think, in partnership with intelligent assistants, we'll be annotating and extending our experience of the physical world as we move around. Then something I'm quite interested in, and I think it's very important, is new energy technologies: new ways of generating energy, and new ways of distributing energy, that can free us from fossil fuels and can also make the use of existing energy far more efficient. I think that's necessary for the next generation of technology. And one of the reasons I'm working on wireless power is because I think mobility is so important, and we've got to find a way to give power to mobile devices without wires and without the limitations of today's batteries.


Stephen Ibaraki: Wow, that's a really amazing array of projects you're working on with your ventures and themes. I guess then from your current role there must be some top challenges and top opportunities within all of these different ventures that you're working on.

Nova Spivack: Yeah, I think the main challenge when you're doing this kind of work is getting funding, of course. When you're working with early-stage projects that involve a lot of innovation, it takes a very particular kind of investor with forward vision, and you have to find people who understand what you're doing and are interested in taking the kind of risks that we take, on the chance that there might be a big breakthrough. So that's one thing. Finding [good] investors is always a challenge; we've been lucky to find a lot of them.
I think the next big challenge is talent: attracting entrepreneurs to be involved in these companies and, in some cases, to lead these companies as the CEO, or in other cases to be on the team. That's of course very important to my projects. And so my model has really been my production studio: start and incubate the companies, fill a number of the key management roles in the beginning, gradually find the right people and put them into key roles over time, and then the company leaves the nest; they [become] able to go on their own. So in several of these companies now we have terrific CEOs, who have tremendous industry backgrounds in their respective fields, so the model's working. But it's very important, I think, as it grows, to create a system where we're constantly meeting and recruiting really interesting talent and building a network of funding partners.


Stephen Ibaraki: What kinds of lessons shaped your life and you think would be useful to the audience?

Nova Spivack: Well, thinking about that question, I think there are four big lessons that kind of jump out. One is growing up with my grandfather, Peter Drucker, who was a well-known management theorist. He was a huge influence on me as a child and certainly through my college years. He passed away not too long ago, so I knew him even [when] I was an adult. He had a huge influence on how I look at problems, how I solve problems, how I think about the world, and my understanding of systems. And I think on the personal level, he gave me a lot of guidance for my own career direction, so I had the benefit of having him as sort of my personal consultant at the time, which I think was a rare privilege, really. So he was a huge influence, and I could speak at length about that.
Another big life-changing event for me was in 1992, when I attended the International Space University, which is an international program to train business and technical leaders of the space industry. It was a graduate program, and that year it took place in Japan over the summer. What was most important to me, however, was that afterward I took the rest of the year and just travelled alone through Asia. I spent a year just traveling through Asia on my own, exploring and going wherever I wanted to go. I ended up primarily in Nepal and India, where I had amazing experiences that have really, really changed my life in a lot of different ways.
When I came back from that trip [it was] when I started EarthWeb. And that was another huge experience for me. It was the first company I co-founded; I had previously worked with ventures, but that was the first I started. And it was an amazing experience, because we happened to be right there at the beginning of the Web, and managed to go all the way through to [obtain] an IPO. So it was a very exciting and formative experience for me as an entrepreneur.
And more recently, a very big experience was the death of my father, which happened just a few months ago. It was also an incredible learning experience, a big experience, and anybody who's lost a parent knows it changes your life in so many ways. It's something I'm still learning from.


Stephen Ibaraki: Now, can you make your predictions of the future, their implications, how we can best prepare?

Nova Spivack: Yeah, sure. What I'm thinking about right now, on the technology front, is how life is going to change if some of the trends I'm seeing continue. In particular, one theme that I'm thinking about a lot is what happens as we augment reality, and if that trend continues.
Right now very few people have even experienced what I'm talking about. But if you have a smartphone, like an iPhone for example, you may be able to try an application that gives you augmented reality; it will overlay, on your camera view, information about the things you're looking at. So if you take your camera and you look at a building, it will tell you the name of the building, for example. If you're walking around, it will give you the names of the streets as you look at them. And that can go a lot further. You could see the names of people as you talk to them. You could look at products and get information about those products when you're shopping at a store.
There are a lot of different possibilities with this. What's interesting is: if it goes further, what happens? Now what do I mean? Well, imagine in the future that this pervades everything in our lives. Whether it's through a phone, or just consumer devices that have little displays on them or can talk to you, or even possibly eyeglasses or contact lenses with little displays embedded in them, or brain interfaces where we have a direct connection to the brain to get information - however it happens, eventually it will. We will be able to get information off of everything. Everything we're looking at, everything we're doing.
So what's most important, and I think the challenging question that comes from all of this, is that there will come a point when we have so much information about any decision we're going to make - and information about what other people have done in the same situation, which decisions worked better or didn't work - that the decision won't even be our decision anymore. There will come a point where the question will be: who's actually making the decision? Is it you, or is it the collective global mind of all the software and data and machines that are all connected, giving you advice at the moment that you do anything? Gradually, as we get this kind of advice from machines we trust, we'll start to take that advice, and when that happens, the other questions are: Who is actually the doer? Who is making the decision? Is there free will? Is it all of humanity, the global mind, that's making the decision? Is it you as an individual making the decision? What is an individual? So I think we're going to get to this place not too long in the future - maybe 50 years, maybe less - where the question of what an individual is, and what free will is, becomes a very real question for all of us.


Stephen Ibaraki: How does that tie-in with this concept of singularity or with Kurzweil's concept of singularity, or does it?

Nova Spivack: I think it's actually very related. You know, Kurzweil's concept of the singularity is that there will be a point in time when computing power will exceed our ability to understand what's going on. And basically, we won't be able to know what happens after that point, because computers will be able to invent things that we can't imagine; they'll be able to do things that we can't understand. Basically, we can't see past that moment when computers get that powerful.
I think this is a similar concept: you can think of it as he's talking about a technological singularity, and I suppose I'm talking about a psychological, or spiritual, singularity. The question is, when the computers do get that powerful, and they begin to give us advice and augment everything that we do, what happens to us? We won't be able to understand ourselves either, because so much of what we do will actually be dependent on information and advice we're getting from computers.


Stephen Ibaraki: Wow, that's really fascinating. I can see the implications, and it's good to get this into the broader conversation as you were doing. Now, the next series of questions comes from Alex Lin, who is the founder and CEO of China Value, and I'm going to be quoting him through the next series of questions.


Alex Lin: Thank you for accepting this interview with China Value. Your contributions to the semantic web and the global brain will change the world. Please explain your ideas around the concept of the global brain, a term coined by Howard Bloom. And can you quickly profile and comment on critical ideas in this field from a few others that you feel are noteworthy?

Nova Spivack: Thanks. I think the global brain is a very important idea. And actually it's been around a lot longer than most people would think. Probably one of the earliest references to it started with H.G. Wells, the noted science fiction author. He had an idea which he called the World Encyclopedia. He predicted that one day in the future there would be a global encyclopedia that everybody would have equal access to. Well, of course, today we have that: it's the Web. And even within it, there are things like Wikipedia, which literally is [a] global encyclopedia. But his concept was that the World Encyclopedia would be the beginning of a kind of global shared memory. So a global brain does have to have a shared memory.
Teilhard de Chardin was a French priest. He wrote a number of quite important books about the concept of the Noosphere and also the notion of the Omega Point. He basically believed that there was a realm of ideas - an environment of ideas and intelligence - and that we were heading towards a point in time when the whole universe would wake up and become conscious and aware. Again, another important theme for many people thinking about the global brain, although Teilhard de Chardin took it to a spiritual point. I think the general takeaway most people got from that was really this idea that the planet could wake up. And you've heard of all those questions, like the Gaia hypothesis, and how some people have taken literally the idea that the planet might have a soul or might wake up. In the case of the global brain as a theme, we think about the possibility that the collective mind of humanity would wake up and think. And in many science fiction movies, for example the Terminator, we've seen very negative portrayals of what that might be like if the mind of the network woke up and then decided it didn't want humans around.
But a more positive view was presented by Gregory Stock, who wrote a terrific book called Metaman. In that book he talked about how there are many systems today - economic systems, medical systems, agricultural systems and manufacturing systems around the world - that really cannot function without the rest of the global infrastructure supporting them. For example, today you cannot run an economy in isolation. Every economy is tied to every other economy, and you see ripple effects moving around the world. You can't separate or isolate anymore. Similarly, even a farm today: it's very difficult for it to function without supplies and interactions with the external world; it's not an isolated farm anymore. Also, a modern hospital cannot function on its own: it needs electricity, of course, but it also needs supplies and expertise, and interactions with laboratories. It's no longer a self-contained system; it becomes part of a distributed medical service. So you can look at just about anything today and find that it's connected to everything else. Metaman was a book that explored that in great detail, in a technical, economic kind of way.
Francis Heylighen, another very important researcher, spent many years on a big online project called Principia Cybernetica, which is basically a set of algorithms and essays about collective intelligence and how to help large collective intelligences learn on the Web. Of course my friend Howard Bloom, whom you've mentioned, has written a number of wonderful books about collective intelligence and the global brain. His primary contribution has been to remind us that the global brain is already here, and has been here since the beginning of time. It's actually operating on an atomic scale, so that the Universe itself is intelligent in ways we're just beginning to understand; there's been creativity happening throughout the ages. Today we humans think we've invented that, but in fact, we're just the latest technology.
Beyond him, of course, there is Ray Kurzweil with his concept of the singularity - an extremely important thinker. One of the best things about Kurzweil's work is that he's really quantified and calculated a number of important trends - for example, the rate of growth of computation - which has helped to establish some basis for concrete predictions about when certain technological changes are going to happen in the future. And his concept of the singularity, which you mentioned earlier, is a very important idea today; it's debated whether there will be a singularity, when it will happen, and what it will be like. And then finally, I would add Kevin Kelly, the former editor of Wired magazine. Kevin has been writing books and papers about his views on collective intelligence. He thinks of the Web as what he calls the "One Machine": the most powerful machine ever built. It's global, it uses 5% of the world's electricity, it never breaks or fails, it never stops; you can't turn it off. It's arguably the greatest thing we've ever made. So he talks about the Web and its collective intelligence, and where that's going. I think he's doing some wonderful work on that.


Stephen Ibaraki: Those are some very interesting ideas and people you've brought forward. I'm just thinking: in 2012, they are saying there is going to be a solar max, so what impact will that have on the global brain, which is really dependent on computational power? And then there's this other concept of Cogito Ergo Sum (I think, therefore I am).

Nova Spivack: Well, yeah, two things there. First, obviously, is the solar max, and we could have the terrible solar storms that some people have predicted. That could be something catastrophic, I think, but we'll survive it. Even if major pieces of our infrastructure get fried by electrical storms, we'll be able to recover. So I think it may be problematic, it may be very disruptive, but I don't think it will be the end of the world.
As far as "I think, therefore I am": as a Buddhist I actually look at it the other way - I think it's "I am, therefore I think", which is I think a more Buddhist view on that. So I guess the question you are getting at, and the question that I think about a lot, is whether or not the collective global brain that we're creating will become conscious, and what that means. Will it be conscious like a human is, or in a different way? What is consciousness? That's a very important question. And that question, for me, is the reason I'm interested in Buddhism. I got interested in Buddhism because I was interested in consciousness. I'm sure I'll talk about that later in this interview.


Stephen Ibaraki: I guess this ties into Penrose's work in this area about consciousness.


Alex Lin: Now, what do you think about the argument that there will be an emerging global brain, and we've talked about this already, the worldwide mind? What are your views on the emergence of a global group of simply connected entities of independent people?

Nova Spivack: Well, I do think the worldwide mind is already here; it's existed for ages. It started with the beginning of oral traditions, when people would communicate stories that would travel around the world. Some of those stories, like Aesop's fables, still exist even today, and are still in the background of our childhoods, shaping some of our early development. Later, first with written language and then with the printing press, we found new ways to communicate our ideas non-locally, so we could distribute them over greater distances, both in space and in time. Stone tablets lasted a really long time. Printed books don't last as long, but they are easier to make, and so we have a lot of records, ancient as well as current, spreading over the world. With personal computers and computer networks we've created a more efficient way to distribute our ideas more rapidly and more widely, and to replicate them even less expensively.
And so you can look at all those different technologies as adding up to really improving the efficiency with which humans distribute knowledge. And beyond that, actually, [we] begin to automate intelligence. Knowledge and intelligence are two different things. You can say knowledge is the data, and intelligence is the program, the process: it's what you do with the data. And we're actually distributing both of those functions. It used to be that you could distribute only the knowledge (the books), but the intelligence lived in people's heads. So you had to have a person in a certain place to do something with the book. The next step is that intelligence will also be distributed. We're seeing that happen with the beginning of intelligent applications, the semantic web and so forth, where we're starting to take intelligence and make it something we can distribute and replicate by moving it out of specific software or specific people's heads. We're actually beginning to create expert systems, or smart intelligent assistant applications, that can learn, understand, evolve, and grow, and that are also independent of any specific location: they are software; they can move around, they can be copied.
So the next big step is to do for intelligence what we've done for knowledge. As that happens, I really think we can start to say that intelligence lives on its own, outside of us, in the Web. It is there in very primitive ways today, but I think we're going to see big advances in the next hundred years. And Kevin Kelly has been writing a little bit about this idea as well. He's actually calling for a search for intelligence on the Web - basically like the search for extraterrestrial intelligence, but what he's asking us to do is search for intelligence on the Web. How would we find it? How would we detect it? How would we know it was there if we found it? I think it's an important hypothesis. It's more likely that we'll find a non-human intelligence living in the Web than that we'll find one in outer space - at least, it might happen sooner. And I think it's a very interesting frontier to explore.


Stephen Ibaraki: It sounds like something that you could get Paul Allen to fund, since he funded the SETI project.

Nova Spivack: Effectively, that is what he's doing. He has a number of projects that are working on the frontiers of artificial intelligence today.


Alex Lin: Now can you explain your key points about the WebOS and the software-understanding-based web (that is, the machine-understanding-based web)? Do you have any new updates since you published your map of web evolution?

Nova Spivack: I think the WebOS - the Web Operating System, or the Web as an operating system - is something that is happening. There used to be a battle between operating systems, say Microsoft Windows and Mac and UNIX or Linux. Today the battle has shifted to whether it is going to be Google, Amazon, Microsoft's cloud, or Apple for that matter. Which cloud is your application going to live on? Where are you going to get the storage, and what computing resources does your application draw on? Operating systems provide an interface between an application and a computer. Today that operating system is no longer on the desktop; even though we still have desktop operating systems, the important operating system is out on the web. So the question is: when you write software, are you going to write it to Amazon, to Apple's cloud (which is coming), to Microsoft's cloud, or to some other cloud? Where does the back end of your application actually live? I don't think it's finished yet; we don't know who's won. Right now there's fierce competition in that space - just like there was for desktop operating systems in the 90s.
And I think where it's going in terms of machine understanding - that's really the semantic web. That's a piece of the operating system. Within the web operating system, one of the things it has to do is handle data. Just as we had file systems on our computers, we need file systems in the cloud. And what is that file system going to be? It can be a simple file system, or it can be a database (we have those in the cloud; there are different kinds of databases you can use that live in the cloud). But the next step is to go beyond old-fashioned databases and start to use the semantic way of storing, connecting and retrieving information. That's where the semantic web comes in, and the big idea there is to mark up information in a way that makes it understandable to any application that sees it. So the application doesn't have to know anything in advance; it doesn't have to be explicitly programmed to understand some set of information beforehand. When it encounters that information, the information carries metadata, which teaches the application what this information is for and how to use it. And so there may come a time when applications will be able to understand any dataset. You could make a universal application that can find data about medicine, or data about cars, or data about shopping, or data about travel, and it would know what to do with that data - how to help you intelligently query it and make sense of it - even though it hasn't been programmed in advance to do that. And all of that will be possible because the semantic web metadata is able to describe, in a way that the machine or software can understand, how to use that data.
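The idea of data that carries its own instructions can be sketched with a toy example. This is only an illustrative sketch, not actual semantic web tooling: the "@context" convention below loosely echoes JSON-LD, and the record fields and the generic describe function are hypothetical names invented for this example.

```python
# A toy illustration of self-describing data: the record carries a
# "@context" block of metadata that tells a generic application what
# each field means and how to display it. The application itself has
# no hard-coded knowledge of medicines, cars, shopping, or travel.

record = {
    "@context": {  # metadata: maps field names to meaning and units
        "name":  {"label": "Product name", "type": "text"},
        "dose":  {"label": "Recommended dose", "type": "quantity", "unit": "mg"},
        "price": {"label": "Retail price", "type": "quantity", "unit": "USD"},
    },
    "name": "Aspirin",
    "dose": 500,
    "price": 4.99,
}

def describe(record):
    """Render any record using only its embedded metadata."""
    context = record["@context"]
    lines = []
    for field, meta in context.items():
        value = record[field]
        if meta["type"] == "quantity":
            lines.append(f"{meta['label']}: {value} {meta['unit']}")
        else:
            lines.append(f"{meta['label']}: {value}")
    return "\n".join(lines)

print(describe(record))
```

The same describe function would work unchanged on a record about cars or travel, because everything it needs to know travels with the data, which is the essence of the "universal application" idea described above.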
So that's an important piece of the WebOS. If it develops, it will enable much more universal, more powerful, more intelligent applications. The question is not really "if", it's "when" and "how" this will happen. The original proposal came from the World Wide Web standards organization, the W3C. Tim Berners-Lee and other people there proposed a set of open standards for the semantic web, including some special languages, called RDF and OWL, which allow applications to make sense of data and reason over that data in a new way that doesn't require them to know in advance what the data is. Those standards have not been widely adopted, unfortunately. They were actually good proposals. But if you look at where technology is going today, it's not doing that; big companies are trying to do what's simplest and cheapest right now, even if it means it won't be as universal, even if it means it will be more difficult later. And what it really gets at is a tradeoff: you can throw a tremendous amount of computing power at problems, not necessarily doing things in the most efficient way, or you can try to do things in a more elegant, efficient way, so that you don't have to use as much computing power.
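The real standards here are RDF and OWL; as a toy illustration of the core idea of self-describing data (not the actual RDF stack, and with made-up example facts), one can sketch a subject-predicate-object triple store in a few lines:

```python
# Toy sketch of "self-describing" data: each fact is a
# (subject, predicate, object) triple, so a generic application can
# discover what a thing is without a hard-coded schema.
# The facts below are invented placeholders, not a real vocabulary.
triples = [
    ("aspirin", "isA", "Drug"),
    ("aspirin", "treats", "headache"),
    ("Drug", "hasProperty", "dosage"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern (None = wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if subject in (None, s)
        and predicate in (None, p)
        and obj in (None, o)
    ]

# A universal application, never programmed about medicine, can still
# ask "what do we know about aspirin?" and act on the answers:
facts = query(triples, subject="aspirin")
```

In the real semantic web the predicates would be URIs drawn from shared ontologies, which is what lets unrelated applications agree on what the metadata means.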
And what's interesting (this comes back to Kurzweil) is that it's getting so cheap to throw so much computing power at problems that, in fact, you might as well do it the inefficient way. It's simple, it's cheap, you can do it. So that's what a lot of companies, such as Google and others, are doing, rather than adopting and using new, difficult but powerful standards like the semantic web. They're just using brute force computing to try to figure out what data means. It's not nearly as efficient – it takes a lot of computation – but so what? Computing is so cheap.
This is something that Tim Berners-Lee and the people who invented the semantic web hadn't planned on. Moore's law basically shows that the price of computing will continue to get less and less expensive, as power continues to go up. And so if that's the case, maybe with brute force computing we will be able to do things as powerful as, or even more powerful than, what the semantic web standards were designed to enable. So either way, I think we will eventually get to a WebOS that includes a layer that does very smart things with data; it will understand what the data means and will be able to make connections for you. It will function in a way more like our own minds function. You know, when we learn something, we don't think about it; it just sticks in our memory, it's there. We connect it to other relevant things, so we can use that knowledge in the future alongside everything else we know. It's rather difficult to do that across the applications we have today. But I think it's going to get easier. And whether it's done through the standards of the semantic web or whether it's just brute force computing, I think we'll get there. And if you believe Ray Kurzweil, that will happen within a few decades. If you are a little more skeptical (which I happen to be), it may take a little bit longer than that. But one way or the other, it will happen.


Stephen Ibaraki: I guess there have been enormous strides made in machine learning, and it's used widely today (although it's more the brute force method).


Alex Lin: Now, what do you mean by the singularity in 2029 (when the human brain equals $1)?

Nova Spivack: That's really a Kurzweil idea. Over the years there have been different dates predicted for when the singularity will happen. One set of predictions shows that in 2029, computing power equivalent to that of a single human brain will cost about one dollar. For a dollar, you could have as much computation as the human brain does. If that's true (and that's amazing), it could mean, for example, that you could be wearing a wristwatch that was as computationally powerful as the human brain. You could have a human brain in every software application. You could have human brains effectively out on the Web.
Now, here's where it gets a little tricky. First of all, how much computation does the human brain really do? Well, this figure, which Ray Kurzweil and his people came up with, is based on some assumptions about the level at which computation happens in the human brain. Basically, they are assuming that computation happens only at the level of neurons. But in fact, that may not be true. There is research that shows that computation happens at much deeper levels, smaller than neurons – at the chemical level, for example, or at the level of the synaptic connections between neurons. In fact, there's even evidence that it happens at a quantum level, in structures called microtubules, which are very, very small structures within the brain, many orders of magnitude smaller than neurons. And if that's true, then there's actually a lot more computation happening in the brain than Kurzweil assumed when he made that calculation. What that means is that it may in fact take a lot longer for the actual amount of computation happening in the human brain to cost only a dollar.
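How sensitive the "$1 human brain" date is to these assumptions can be sketched with some back-of-the-envelope arithmetic. Every constant below is an illustrative guess of mine (the price point, the halving time, and both brain-compute estimates), not Kurzweil's actual model:

```python
import math

def years_until_one_dollar(ops_per_second, dollars_per_gflops=0.01,
                           halving_years=2.0):
    """Years until `ops_per_second` of compute costs about $1, assuming
    the price of computing halves every `halving_years` (a Moore's-law
    style assumption; all constants are rough illustrative guesses)."""
    cost_today = (ops_per_second / 1e9) * dollars_per_gflops  # dollars now
    if cost_today <= 1.0:
        return 0.0
    # Each halving divides the cost by 2, so we need log2(cost) halvings.
    return halving_years * math.log2(cost_today)

# A commonly cited neuron-level order of magnitude, vs. a much larger
# figure if computation also happens well below the level of neurons:
neuron_level = 1e16      # ops/sec, illustrative
sub_neuron_level = 1e22  # ops/sec, illustrative

horizon_coarse = years_until_one_dollar(neuron_level)
horizon_fine = years_until_one_dollar(sub_neuron_level)
```

Under these assumptions, every extra factor of a million in the brain-compute estimate pushes the date out by roughly forty more years, which is exactly the point: the prediction moves by decades depending on where you think the computation happens.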
But let's forget about the question of how long it takes for a minute. Will it ever happen? If we had an infinite amount of time, would it ever happen? That's an interesting question. Could we ever do as much computation as the human brain does? That's a pretty deep question, because we don't really know today whether the human brain is actually separate from the rest of the universe. It gets to that question: are we really isolated, or are we all part of the whole? If in fact the computation of the human brain, at such a deep quantum level, somehow connects to the very fabric of reality, then it would be very difficult to separate that from the computation of space and time, the computation of the universe itself. On some level, technology is going to take us back to the same set of questions that Taoism and Buddhism (and Hinduism) have taken us to as well. At the end of the day, the questions are: "Who am I?", "How am I connected to the universe?", "What is all this?", "How does it work?", "What's the connection between self and other?"
We will get back to that question through technology eventually. Because when we start looking at this question – "where is the computation happening: is it in my brain, is it in the universe, what's doing the computation?" – ultimately we may find that every brain is just a piece of a much larger computer, the universe. And you can't separate those two things. You can't separate the brain from the universe; it's one system.
If that's the case, then it will never be possible to fit the amount of computation that is happening into anything we build, because effectively there's an infinite amount of computation happening, and certainly we're not going to get an infinite amount of computation for a dollar anytime soon. So it's a deep philosophical question. But on a technical level, whether it's all the computation of the human brain or just a huge amount of computation, I think it will happen within a few decades: we'll get a very large amount of computation very inexpensively. That's going to change our lives; it will change the world. There are many things we are not doing with computers right now because the computation is too expensive. But when the price of computation gets that much cheaper, and it's that much smaller, or it's accessible over the web, we'll see very powerful computation showing up on devices we don't have today. For example in phones: being able to listen to and automatically annotate a phone call, take really good notes, make connections, and perhaps, on a little display, make suggestions or show reminders or links to related files while you're talking to somebody. You can do amazing things just by analyzing the content of a phone call and augmenting it during the call. You could do the same thing with video conferencing; you could do the same thing while walking down the street or driving a car. All of these just require a lot of computation, and the only reason that stuff is not happening right now is that the computation is still too expensive.


Stephen Ibaraki: I guess then this extends into quantum computing, twin particle effect, and all the possibilities that can come from that.


Alex Lin: Now the next question is about the global brain, not just being related to information technology, but also philosophy, sociology, economics, politics and cultural anthropology and so on. We can even say it's related to the next stage in civilization: the question "Who are you?" is based on our evolutionary definition of the concept of human. Can you give us your idea of how matter becomes imagination, this idea in a book by a Nobel Prize winner, Gerald Edelman?

Nova Spivack: I've been speaking about this already in this discussion today a little bit. So the global brain touches on a lot of these deep issues, and it's really just an idea at this point. Just like the notion of self is an idea, we kind of walk around without really thinking about that very much, but we have this convenient notion of self, or identity: "who I am", or "my personality". We can even say things like "this is my body, this is my head; this is my thought". Well who is saying that? That is the question. Who is the one who owns or thinks that it owns this body? We kind of naively assume that there really is somebody there, the owner. And we think that's me, or us, whoever we think we are. That's an assumption most people don't question. When you question it, that's the beginning of the spiritual path. It's also I think, an interesting question for the Internet and the Web: we can look at this question for an individual as well as for the global brain as a whole.
When we talk about the global brain, an analogous thing might arise: if there ever is a global brain, will it have a self? Will it have one self, or many? Will it be an actual real thing that we can point to, or will it just be some data somewhere, that's just a bunch of labels but not really an actual self? So what is self? Same question, just a different level of scale.
The brain, if you actually open it up and look inside it, is just a collection of different parts, billions of structures. But there is no one thing you can point to that's a self. And with the global brain, it's the same thing. If you look at the global brain, you see lots of people, lots of computers, lots of software, lots of data, lots of infrastructure. All of these are separate parts; nowhere can you find "the brain", or even "the self". So this question applies equally at both levels of scale.
It's also interesting that if there were something we could call "self" that was a special thing, something that could only be created by God, then it would be unlikely – or who knows whether – that could ever happen for the global brain, because we wouldn't know how to create that special thing. But the good news, in a way, is that from all the evidence we've seen, we cannot find anything that's an actual self. Self is basically some kind of concept, or an illusion that we maintain because it helps us think, but it's not really there. Because of that, in fact, it's possible that there could be a self for the global brain, that it could emerge, that it could be constructed, because it is just a construct. So the bad news is that we can't find the self. The good news is that it means we can create a self at other levels of scale.
It means you could have an artificial intelligence that has a concept of self. Someday maybe you could have a disembodied concept of self, or a structure that functions like a self for the global brain. It's certainly possible. Now, whether it will happen? Who knows. A lot of things would need to take place, and maybe it's unlikely, but it's possible. So this concept of a human – what it means to be alive, what it means to be a being – is shifting as we explore the frontiers of both neuroscience and computer science. And in a way, this is similar to what happened to our understanding of matter as we developed the frontiers of physics, biology, and chemistry in the previous centuries. It's analogous.
So we're effectively doing for a new frontier – the frontier of mind – what we've already done for the frontier of matter. And what we will discover is that, just as we eventually found that matter decomposes all the way down to the quantum level, the same thing is true of the mind: you never find an end; you get down to the quantum stuff. And interestingly, quantum mechanics itself has already established that there is some strange relationship between matter and consciousness. We don't really understand it, but we can show that observation affects the outcome of an experiment. In fact it's so strange that there are some experiments that seem to show the effect can even reach through time, influencing things backwards or forwards in time. It's very strange; nobody really understands it today. But we know, from numerous different directions of inquiry, that there's a relationship between what we call mind and matter.
And if we jump over to the other side, to the spiritual way of looking at things, all the great spiritual traditions discovered that a long time ago. From their perspective, it's all one thing. Mind and matter are dualistic concepts; they're artificial distinctions. There's really not any separation between all these different phenomena we've observed. They're all a manifestation of something deeper. So I think technology and spirituality are engaged in a kind of dance. This has been going on for centuries: the church and religion have been deeply involved in science. Many great scientists have been deeply religious, because when you ask these big questions, you inevitably start to experience this kind of wonder at mystery, and you even start to realize that there are things we'll never understand, or that can't be understood, because that's what they are. They're not understandable.
So when we talk about the question of how matter becomes imagination, or how matter becomes consciousness, there's a lot of thinking going on about this right now, trying to find the source of consciousness. The approach taken in the book you're referring to is still a very mechanistic, materialistic approach, trying to find some physical thing in the brain that corresponds to consciousness – whether it's a process, or a pattern, or a particular way neurons fire under certain conditions – as the source of consciousness. I don't think that will succeed. I think those approaches may find some analogues, some things in the brain correlated with certain experiences that we have. So when we see something that we recognize, there's a neural signature, and you can detect it. That's the basis of next-generation lie detectors, which can actually detect whether you've seen or remember something by looking at the neural fingerprint of that experience in your brain.
So we'll be able to see some correlations between sensory experiences, mental experiences and the brain. But that doesn't actually locate the source of consciousness. That locates, perhaps, the source of conception: where the concepts are and how the neurons fire around them. When we talk about consciousness, there's a very specific distinction that we have to make, and that is: what do we really mean by consciousness? Do we mean the entire landscape of thought, or do we mean something more precise – the entity that's actually aware, that is witnessing what is taking place? These are two very different phenomena, and in the West when we talk about consciousness, we don't make that distinction; we're very messy when we talk about this. In Eastern philosophy, they're very precise about it. In Buddhism, for example, there are very, very precise distinctions for all the different phenomena taking place within the field of consciousness. When you experience something, there are many different things going on in that experience, and there are labels and names and technical descriptions for all of these. That's still very much lacking in Western cognitive science and neuroscience. We have a very simple, primitive language that we barely understand when we talk about what's going on in consciousness.
In the East, in the Eastern philosophical traditions, they're much more sophisticated. They've had thousands of years of dialectical debate and research, and they've developed very sophisticated logic and a very precise analytical method and language for explaining what's going on in consciousness. So the conclusion is: we may find some physical analogues for thoughts, for experiences, but we won't be able to find the thing that's knowing. That's different. That's something else, and it doesn't have a neural signature. We won't be able to find it, and that's a statement I'm confident making, because there are logical arguments, as well as experiments you can do in your own meditation, for example, where you can see that and establish that, and you know it's true. And it's not a matter of belief; it's more like an existence truth, a mathematical truth. It's something you can find, something you can show, and other people can repeat it, and it's not even debatable once you actually see it.
Of course, that's something that scientists would probably scoff at and say, "Oh please, that's just another statement by a spiritual person who is not a scientist." Well, I happen to be both. I'm spiritual, but I also happen to be a scientist. And I can say, having looked at both sides of the coin, that there really is something special taking place when it comes to awareness, to the source of knowing. The phenomena that we know – what appears to us through the senses – are not as important or interesting. They're special too, but the really interesting question is, "What is knowing those things?" What is that? Actual awareness is at the very root of the question. So I think that through science we will never be able to answer that question.


Stephen Ibaraki: Fascinating – all of these ideas that you're putting out give much pause for everybody to contemplate. I guess it goes into this next question from Alex.


Alex Lin: I think the evolution of the Internet starts with the evolution of philosophical thinking. The founder of LinkedIn is a philosopher, and you, the founder of Radar Networks, are a philosopher too. As the philosopher Nelson Goodman said, "The world is made rather than discovered." My own ideas are influenced by Popper, Kuhn and László. Can you share who influenced you and tell us something about your philosophy?

Nova Spivack: When I was a student I studied Western philosophy first, starting with Plato and working my way forward. I went through all the classic and standard philosophers that one is supposed to study. I also studied science, the history of science and the philosophy of science, and I was very interested in Thomas Kuhn, of course. But I was also increasingly interested in the work of people like John Searle. John Searle is actually responsible for a famous thought experiment that is very relevant to this discussion, called the Chinese Room. Its purpose was to question what it is to actually know the meaning of something.
And here's how the experiment works: you've got a man, and you imprison him in a room somewhere. He does not speak Chinese; let's say he only speaks English. And then what you do is you hand him a question written in Chinese – you slide it under the door on a piece of paper. Since he can't read Chinese, he doesn't know what it means. But in the room there is a big book, basically a set of instructions. And it says: if you see this character, then write this character; if you see this group of characters, write that group of characters. By following that set of instructions, he generates a response, which he writes on a piece of paper. Basically, he gets the input in Chinese, he has a set of instructions, he follows the instructions, he writes the response and he slides it back out under the door. On the other side, you get that response, and it happens to be a perfect answer to your question.
Now here's the thing: if you don't know what's going on inside that room, you would think, "Well, this guy knows Chinese; he understood my question, and he answered it. He must have known the meaning, so he understands Chinese." But in fact, if you can see inside the room, if you know what's going on in there, then you'd say, "No, he doesn't understand Chinese at all; he's just following the instructions." So this is a metaphor for a deep question about what is going on with computers, artificial intelligence and the human brain. What does it mean to really know something, to really experience the meaning of something?
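The procedure in the room can be sketched as nothing more than a lookup table. The entries below are hypothetical placeholders, not a real translation system; the point is that the code manipulates symbols it in no sense understands:

```python
# The man in the room: pure symbol matching. The "rule book" maps
# input strings to output strings; nothing in the procedure represents
# the meaning of either. (Entries are hypothetical placeholders.)
rule_book = {
    "你好吗？": "我很好，谢谢。",
    "今天天气如何？": "今天天气很好。",
}

def chinese_room(question, rules=rule_book):
    """Produce a fluent-looking answer by lookup alone, with no
    understanding of the symbols being manipulated."""
    # Unrecognized input gets a stock fallback symbol, still by rule.
    return rules.get(question, "对不起。")

answer = chinese_room("你好吗？")
```

From outside the room, `answer` looks like comprehension; inside, it is one dictionary lookup, which is exactly the gap Searle's experiment points at.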
John Searle called that the qualia of an experience. For example, if you see the color red, there's something about that – the qualia of the experience of seeing the color red – that we just can't really put into words; it's the redness of the red. It's the experience of really seeing it. The question is: is that the same as following the instructions? For example, when the guy in the Chinese room is answering a question in Chinese, does he actually understand it? Does he experience the meaning of the question, the meaning of the answer? Does he get the qualia, if you will, of what's being discussed? No. Of course not. He has no understanding of what's going on; he's just following the instructions. That's different from someone who really does speak and understand Chinese. When they get the question, they actually know something, they are experiencing something, and then they are generating something. That's different from just following instructions.
Another good example is chocolate. If somebody described what it's like to taste chocolate to a person who has never eaten chocolate, that person is not really going to understand, because you cannot know chocolate until you taste it yourself. In the same way, there's no way to convey or describe the taste of fine wine; you can't explain that experience to somebody, because it has a qualia. So John Searle's notion of qualia was very influential for me. It tied very nicely to many of the questions I had when I was in college studying the philosophy of mind, about the nature of consciousness and what it means to know something. And I was primarily interested in this because I was trying to create artificial intelligence; that was my goal while I was in college. I wanted to create an artificial intelligence, and I did a lot of computer science research and experiments and built software, searching for this.
Along the way, another very important influence was an American scientist named Edward Fredkin. He's not very well known, but he is one of the founding fathers of the field called digital physics. (He wasn't the only one in the field, I should mention; there were also Stephen Wolfram, and Tommaso Toffoli and Norman Margolus from MIT, who I've gotten to do some research with.) His basic idea was that the universe is one giant computer, doing massive, universe-scale computation, and we are effectively just computer programs. This is all effectively a big simulation. It's the closest thing to reality – there's nothing better – but it's still a computer program. So that's a view I got very interested in when I was in college. And one of the reasons I was so interested in it is that you can actually do experiments using a technology called cellular automata. There's a famous cellular automaton called the Game of Life. Anybody can get it and run it on their computer. It has patterns that grow and evolve and move around in a very lifelike way. But what's interesting is that the computation driving it is incredibly simple. It just looks at every cell on a grid, counts how many of the neighboring cells are alive, and depending on that count the cell either survives, dies, or comes alive. That's it. That little rule, applied to every cell on the screen, generates these unbelievably lifelike, dynamic, animated patterns that even contain stable structures that move around and interact. It's like a whole world; it's amazing. And it turns out there's a huge array of cellular automata systems like that. Stephen Wolfram is the person who's done the most to explore the space of possible systems you can create.
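The rule he's describing is Conway's Game of Life, and it really is only a few lines. A minimal sketch of one update step (the coordinate convention and the sample pattern are my own choices):

```python
from collections import Counter

def step(live):
    """One tick of Conway's Game of Life over a set of live cells."""
    # For every cell adjacent to a live cell, count its live neighbors.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The rule: a live cell survives with 2 or 3 live neighbors;
    # a dead cell is born with exactly 3. Everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row, which oscillates with period 2.
blinker = {(0, -1), (0, 0), (0, 1)}
```

Iterating `step` on patterns like this is the whole program; all the lifelike gliders and oscillators come out of that one counting rule.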
Anyway, what was interesting about digital physics is that cellular automata provided a workbench, a platform on which you can do experiments and play around. And so, through exploring cellular automata, I took this beyond just the consciousness of a mind and started to think of it as a whole universe of computation. And again, there's this question of qualia: if you were able to simulate a universe, is there a difference between a simulated apple and a real apple? Is there a difference between a simulated person and a real person? Are we living in a simulation? I know this is a question that you'll probably be getting to. So I was thinking a lot about that. And these kinds of questions eventually drove me to Buddhism, which became a strong interest of mine. I'd had this interest all my life, but it became particularly strong when I was in college, when I reached the point in philosophy and physics where I really found the edge that our theories couldn't get past – and I wanted to see what's over the edge. I came to the conclusion that our approach in Western materialistic science will not actually get us much of a view over the edge. If we really want to see what's going on at a larger scale, if we want to see outside the system that we're in, if we want to see the background and not just the foreground, then we have to go to a different level. And that's why I got interested in meditation, because that's what meditation focuses on. It focuses beyond all the other things you can focus on. So that was the thing that pushed me – the transcending, if you will, beyond the limitations of the Western view.


Stephen Ibaraki: That's fascinating, and I guess it extends beyond Daniel Dennett and "Consciousness Explained", or the work of Rodney Brooks at MIT – this sort of insect behavior being modeled by very simple rules and computers. There's this next question, and Alex says he's one of the advisors of the Peter Drucker Academy in China, and he wrote an article in 2009, the year of the 100th anniversary celebration of Peter Drucker. He says:


Alex Lin: I hold his "Knowledge Society" in high esteem, and I'm putting all my efforts into making it happen. What do you see as the top three influences of your grandfather, and how has your grandfather's global influence shaped your thinking and your life? Mr. Drucker once commented that knowledge is enterprise. Can you please comment on how knowledge sharing, creation and management help business?

Nova Spivack: One of the things about my grandfather that many people appreciated was that he connected across different disciplines. His mind wasn't limited to one field, and he made amazing connections and really integrated different fields, different trends and different thoughts in ways people had never seen before. And I think what I appreciated most, growing up and having the privilege of him spending time with me (especially in summertime, when we would take walks in the mountains and enjoy conversations, just he and I), was really how he would connect things together.
It's interesting: he thought of management as a liberal art, because it really did connect all the different things together. In particular it was really about people; it had a humanistic focus. Some people think management is a science. He didn't think it was a science. He thought it was an art, a liberal art, like philosophy, psychology, history or even economics (which is on the edge). Anyway, he had an incredible knowledge of history in particular. I think one of the most impressive things about him was that he really knew history. Not just US history or European history, but Asian history, Muslim history – he knew it all. He had this incredible knowledge base in his mind, and he could bring it to the foreground anytime he wanted. So you'd have a discussion with him, and he'd be spouting off names, dates and historical examples for whatever it was he was talking about. That's something that rarely happens; very rarely do you hear history brought to life like that, in a way that makes it so relevant. And that's something we've lost.
I think a big piece of wisdom is actually history. It's learning, it's the lessons of the past, it's from your ancestors, it's from the elders, it's the benefit of experience and time. That's something my grandfather really had. He had incredible wisdom. He was very wise, a kind of guru, really. He was the kind of person you don't really meet that often anymore. And in our civilization, we don't focus on that anymore. Our civilization is focused on the present, on the now. We are not really thinking much about the past – or even about the future. And by the way, that's a big shift (something I think about a lot). Moving into this new era, we are completely focused on the present, on the now; it's a now-centric civilization.
In the agricultural era, we focused on the past because we had to; we needed to understand patterns from the past in order to understand what would happen to our crops. In the industrial age, we got focused on the future, because it was all about progress and innovation. But now we're in the information age, and it's all about the present, because we have to be in the present to cope with the amount of information and change we deal with all the time. Anyway, my grandfather was somebody who was NOT stuck in the present. He was thinking about the past, he was thinking about the future, and he was bringing those into the present. He was constantly connecting all of these different time scales to everything he was doing. When you'd talk to him, he would always connect those. That was a big influence.
Another big influence was that he is well known for coming up with the terms "knowledge worker" and "knowledge work". He really pioneered this whole idea that instead of creating value with their hands, people were going to create value with their minds, and that this was going to be the next big thing, the big focus of the coming century. He was completely right about that. Today we are all knowledge workers. In fact, almost everybody is a knowledge worker; fewer and fewer people are not. I mean, there are laborers (there are a lot of them), but even laborers are becoming knowledge workers as technology reaches into their jobs and connects with them, either by giving them information, getting information from them, or assisting them. When you call somebody in a call center, a computer is now assisting them, giving them information as you're talking to them. So that person is turning from a laborer into a knowledge worker. Even somebody working in a factory is now working with robots, with displays and information systems, as they assemble things. So now they're knowledge workers too. Knowledge work is penetrating all aspects of our civilization.
And my grandfather spent a lot of time with me, discussing the implications of that and also my own ideas about collective intelligence, the global brain and whether or not you can create things functioning like mind for organizations, even governments. So I thought about that a lot. Could you make groups smarter, take some things that I've been thinking about and build systems which would enable groups to be more intelligent, the way humans are intelligent, or even slightly more intelligent than the groups are today? And would that make a difference? So we debated about that. And we also had a lot of debates about whether or not organizations are organisms. One debate was focused on that, and I took a position that organizations were real organisms, and interestingly actually, he argued against that. He didn't think they were living organisms. He didn't feel that. He focused on the individuals; he thought the organisms were the people, and organization was non-living, almost like a shell. But it was an interesting debate that we had.
And I think later in his life a big focus was what he called the social sector, which is a sector of our economy that fulfills certain roles the government used to fulfill. That could be providing medical care, mental health care, relief agencies, charity: functions that help people in need by providing education, food, or shelter. Traditionally, these were considered functions that government had to fulfill, but that has shifted. So in the later part of my grandfather's life, he really focused on helping non-profit organizations and helping the social sector to develop new disciplines and new tools, and to become more professional, more mature, more evolved as a sector, just as he had done in the first part of his career for the corporate, for-profit sector of the economy. And I think that's something I'm just beginning to understand myself. I've certainly done a lot of charitable work on my own, but my grandfather's thinking on that is something I'm not as familiar with. The area where he and I were connected the most was around knowledge work.


Stephen Ibaraki: I can see the influences, though, and then you adding to those influences to make your long-lasting historical contributions. It's really interesting, this idea of reaching back into the past and integrating those lessons, contrary to Eckhart Tolle's idea of living in the now. There's the saying that those who forget the past are doomed to repeat it, and maybe the two are in conflict; how do you resolve that?


Alex Lin: Now, in your social semantic solutions, how do you deal with this fragmentation of the semantic web? And a follow-up question: I think Nick Bostrom's simulation hypothesis sounds like a contemporary version of Gödel's incompleteness theorem in mathematics. What are your views on this?

Nova Spivack: The first part of the question is, how do you deal with the fragmentation of the semantic web? What that question is asking, as we were discussing earlier, is: could we come up with a way for computers to understand data without having to be programmed in advance to know what that data means? There are standards from the World Wide Web Consortium, led by Tim Berners-Lee, the inventor of the Web, and the standards of the semantic web were designed to get everybody using one language for saying what a piece of data means, so that all applications, all software, could understand it. If those standards had been widely adopted, which hasn't happened, then we would have solved this problem of fragmentation.
The fragmentation problem is this: my application understands my data, but when your application sees my data, it can't understand it. It doesn't know what the basic assumptions were. What was this field supposed to be for? You used this weird special label on the field: what does that label mean? Is this a price, or is this a date? What does this number mean? What do I use it for? How do I interpret the data?
The semantic web is trying to solve that problem. For example, if you had a customer record in your database, you could define each field in a way that says what it means: this is the first name, this is the last name, this is the company, this is the address. Can we define those terms in a sort of universal way, so that every application knows this is a customer record and can say, "Oh! I know, this is a customer record. I know exactly where to find the address, or which field has the most recent purchase"? You can't do that today, because there is no standard that everybody is using for defining data records. The semantic web is trying to solve that. Since it has not been adopted, we don't have a solution. And so the answer is, my social semantic solution does not solve that problem. That problem has to be solved by standards, and unless people adopt those standards, it won't be solved.
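The customer-record idea can be sketched concretely. One real standard in this space is JSON-LD with the schema.org vocabulary; the record below is a minimal illustration (the person and address are invented) of how shared terms let any application interpret the fields the same way:

```python
import json

# A minimal sketch of a semantically annotated customer record, using
# JSON-LD with schema.org terms (the person and address are invented).
# Because "givenName", "familyName", and "PostalAddress" come from a
# shared, published vocabulary, any JSON-LD-aware application can
# interpret the fields without being programmed for this particular
# database schema.
customer_record = {
    "@context": "https://schema.org",
    "@type": "Person",
    "givenName": "Ada",
    "familyName": "Lovelace",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Lane",
        "addressLocality": "London",
    },
}

# Serialize it; any consumer can now locate the address reliably.
print(json.dumps(customer_record, indent=2))
```

The point is not this particular syntax but the shared vocabulary: a second application that has never seen this database can still find the address, because "PostalAddress" means the same thing everywhere.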
There is one other option, which is brute-force computing: maybe someday software will be smart enough that we won't need the standards, because an application will be as smart as a person and can see the data and figure it out by itself. But in the near future, we are bound to have this fragmentation. That is, some applications will understand some data, and some applications won't understand that data. That's just the way it is. It's going to be messy, people will have to do a lot of work to integrate things, and all of this could be avoided if people adopted simple open standards. But they are not doing that for various reasons, some of which are competitive: by not making the data open, you make it harder for other companies to do something with it, which means they have to use your software. That's actually the strategy that's been used very successfully by Microsoft, for example, and others for decades. Google is doing the same thing, and until it becomes more valuable to a company to get everybody else using their data than it is to keep them from using that data, I think we'll continue to see fragmentation.
But in certain areas maybe that will start to change; for example, around system integration. There you definitely need to integrate the data, and there's huge value there, and that is where the semantic web has been applied the most. So the standards have been used in system integration to solve some of those problems. I think fragmentation will continue to be an issue. And what I'm doing with Bottlenose, which is a social semantic network (it's basically like Twitter, but semantic: a much richer, smarter system, and it works with Twitter, but it's a much smarter environment): we have our own semantics for defining what different pieces of data mean, and we hope to make that open and share it, to support open standards so that other people can do something with that data in the future. But that's the stuff that will happen in the future; we haven't launched the product yet.
Now, the second question is about Nick Bostrom's simulation hypothesis. That's a very interesting and important concept. In a nutshell, his simulation hypothesis essentially says that probability indicates it's likely we are living in a simulation. Now, that's a simplification; in fact there are three scenarios. It's actually three questions, and the first two questions basically ask, "are you living in a simulation?" and "do civilizations that reach a certain level of sophistication generate simulations of reality?" If that doesn't happen, fine, then you're not living in a simulation. But statistically, looking at the various possibilities, it's more likely that we are living in a simulation, if in fact civilizations are capable of evolving to the level where they can do that. So basically, if civilizations can ever evolve to the point where they generate simulations of reality, then it's likely that we ourselves are living in one of those simulations. It's very difficult to explain. It's very difficult to understand, in fact (it's heavy reading).
Returning to the question, the gist of it is: are we living in a simulation? I'm not sure exactly how that relates to the fragmentation of the semantic web, so I'll treat it as a separate question, but my answer is that I think that's the wrong question. Ultimately, if we are living in a simulation, then it very likely means that that simulation is itself happening at a higher level. So if you follow Nick Bostrom's hypothesis to its conclusion, it's simulations all the way down, which echoes something we used to say in physics: "turtles all the way down", which is based on a famous anecdote. An old lady came to a physics lecture where the physicist was talking about the universe, and she said, "Oh, forget about all that stuff. The universe is on the back of a giant turtle." He asked her, "And what is the turtle standing on?" and she said, "Oh, it's turtles all the way down." That is, turtle on turtle and so forth.
I think the problem with Nick Bostrom's hypothesis is that it leads to the conclusion that it's simulations all the way down, so it doesn't necessarily tell us anything all that useful. It doesn't, for example, mean that there's one thing outside that is simulating everything; it's more likely that we're living in a simulation inside another simulation, and so on endlessly. I think it's a mind game, basically. At the end of the day, who cares? What's useful about that? What does it change? If we were in a simulation, how would we live differently? Well, we wouldn't, because it's such a good simulation that it seems real.
But it does touch on the nature of consciousness and some of my interest in Buddhism. The conclusion of Buddhism is that nothing is actually as real as it seems, and this notion of "real" needs to be investigated. So effectively, you could say everything is like a simulation. From the perspective of Buddhism, everything that appears has the ontological status of a mental event. That is, when you see something like a chair across the room, or a window, it's not a real chair or window; it's an image or appearance of a window or a chair that is appearing to you. It could be different for somebody else. It's not the same for everyone. Nobody sees the same thing the way you see it; everybody has a different experience. So effectively, all we really know is that we have our experiences. We don't know if anyone else experiences a thing the way we do. Each of us can only measure our own experience. And so effectively, the world is a set of experiences; the universe is a set of experiences. And those experiences are conscious, mental events. That's what they actually are. So the Buddhist view is that what we call the universe, or life, or reality, is this collection of mental events that we label as self and other, and treat as real.
When you actually look for those things and try to find them, you find that you can't. You can find things that you call consciousness, or what Buddhism calls "emptiness". You find different results, but they aren't things you can actually grasp. They're concepts. Like the void, like emptiness, like space: you can't grasp those things. So from the Buddhist view, this is a simulation of sorts. The cause of the simulation, if you will, the software, is karma: a collected reserve of cause and effect from previous actions. That is what generates the simulation; that's the program. And where is it taking place? That's an interesting question, and it's not the same answer you'd get from Nick Bostrom. Bostrom says the simulations just go on forever; in the Buddhist view, it's not like that. There is a whole simulation we're experiencing, but there is an ultimate layer that you can actually discover and establish through logic, as well as through your own direct experience. That's the background, and that itself is not a simulation; it's not in another simulation, and it's not a phenomenon or a mental event. That's an important thing, and you have to find it for yourself. It takes a lot of study and work and years of training. But you can find it, in a manner of speaking: you cannot grasp it like other things, but you can establish it. It has an ontological status. It's actually more real than the things we call real. And so from that perspective we escape from this whole question, the infinite chain of simulations, the infinite turtles. We get out of that infinite regress, which is kind of a logical fallacy.
Now, in cognitive science, interestingly enough, I think we are eventually arriving at similar conclusions. As neuroscience and cognitive science penetrate the mind and the brain, and how all that works from the physical perspective, we're also gradually reaching similar conclusions: that we aren't going to be able to find the source of the simulation. We know the stuff that's happening, but we still can't find the thing experiencing it. It's not in the brain; the brain alone doesn't account for it.
If we look at things very closely, at high resolution, we find we are on [the cutting edge] of a very sharp razor; if we are very careful in making those distinctions, if we look closely, it becomes clear. It's very hard to describe or prove in a very short conversation. In my own case, it's taken me 30 years to reach a really deep level of understanding of it. It's not easy, but it's a very important question, "Are we living in a simulation?" I think it's a modern-day way of asking the questions religion has asked: "Is this all there is?", "Is it real?" But the answer you get from Nick Bostrom is not the correct answer in the end. It's not the right answer. The answer you get from that is basically more concepts. That's not the answer. The answer is beyond concepts.


Stephen Ibaraki: Great. It's quite a fascinating discussion. We've sort of turned the questioning back to the global brain and the semantic web.


Alex Lin: So in your view, what are the visionary milestones of the global brain and the semantic web? And the next question is: what would be the evolution of the business model and the future with regard to these two?

Nova Spivack: First, let's define the relationship between the global brain and the semantic web. As I said, the global brain already exists and has existed since we've had language. It's just gotten more global and become more and more of a brain over time. You can simply say it's the collective intelligence of humanity. The semantic web is a technology, a recent technology, only a few decades old, which could provide the infrastructure to make the global brain smarter. But there are different ways of doing it; there are different ways you can provide semantics, which is basically a meta-language for saying what things mean. So when you see a word, when you see a piece of data, the semantic web gives you a way to define what that means so that machines can understand it. That's all. There are different approaches to doing that. And whether the semantic web ever happens, and how it happens, doesn't make a difference for the global brain. Sure, it can make the global brain smarter, and it can make it smarter sooner, but at the end of the day the global brain is already here and it will happen no matter what we do. It's been evolving as long as there have been humans. In terms of milestones, earlier in the discussion we talked about some key big leaps: the invention of [spoken] language and then written language, the ability to print, printing presses, telecommunication networks (radio, TV, phone, fax machines), the World Wide Web (which is the ultimate global printing press), and now the next step is to do for intelligence what we've now done for knowledge: to start to get intelligence out of the minds of experts and embody it in software, so you can make software that can give you advice the way a doctor or stockbroker might. That's the next big frontier, to do for intelligence what we've done for knowledge.
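The meta-language idea can be sketched very simply. In RDF, the core semantic web standard, every statement is a (subject, predicate, object) triple; the tiny illustration below uses invented `ex:` identifiers (real systems use shared, published vocabularies) to show how a program that knows the predicates can answer questions about the data:

```python
# A minimal sketch of RDF-style triples: each statement says what a
# piece of data means, as (subject, predicate, object). The "ex:" names
# are invented for illustration; real deployments use shared, published
# vocabularies so every application reads them the same way.
triples = [
    ("ex:nova", "rdf:type", "ex:Person"),
    ("ex:nova", "ex:founded", "ex:EarthWeb"),
    ("ex:EarthWeb", "rdf:type", "ex:Company"),
]

def objects_of(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Any program that knows the shared predicate "ex:founded" can ask the
# same question of this data, with no per-database custom code.
print(objects_of("ex:nova", "ex:founded"))  # ['ex:EarthWeb']
```

This is the whole trick: the meaning lives in the shared predicates rather than in any one application's code, which is what would let machines, not just people, read the Web's data.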
And I don't know how long it will take, but I see it happening: many consumer applications now have more intelligence than they used to have. And some are really smart personal assistants that will do tasks for you, that you can even talk to. So I think we are really on the edge of that. Big milestones: that's a great question, and something worth thinking about, what the future milestones will be as the global brain evolves. It would be hard, on the spot, to give you a really good answer about what the milestones [will be]. It's a great subject for a paper, actually. So I'll just throw a few ideas out there.
I think the critical threshold is when the parts can no longer function independently of one another, when they become part of a whole. So what's the distinction between a part and a whole? Let me give you some examples from biology. In evolution, there's the notion of symbiosis; Lynn Margulis and others have done a lot of good thinking about that. In symbiotic systems you have different organisms that happen to be near each other, and some of them are more successful and able to reproduce more effectively by teaming up in certain ways with others. So for example, you might have single-celled organisms in a pond, or in some ecosystem, and some of them are good at digesting stuff, and others are good at just sticking to stuff. Hence, if the ones that are good at sticking to stuff are near the ones that are good at digesting stuff, they can team up. The sticky ones grab food particles and the digesting ones dissolve them, break them down, and thus the ones that are good at sticking also benefit from the nutrients they share. So these two different kinds of organisms can sort of cooperate, and they don't mean to. It just happens. They're useful to each other. But because of that, they become more successful, and so their populations increase together. And gradually, the way evolution works, natural selection statistically starts to favor versions of these organisms that are better at collaborating with one another. So in fact, the organisms that are good at sticking to food don't have to work so hard at digesting, because the digestion is done by another organism nearby that breaks the food down. Thus over many generations, as they continue to evolve, they start to specialize. They don't have to be as general; they don't have to be able to both stick to particles and digest them, because that labor is shared by specialists.
And gradually, as they specialize more and more (again, it's not intentional, it's just a statistical side effect of evolution), their genes specialize, and eventually you get organisms that literally cannot survive on their own. They must be with the other type of organism to survive. They lose the ability to stick to food, or to digest it, on their own.
When that happens, there's a crossing point from parts to whole: before, they were separate organisms that could exist on their own; they were just parts that happened to be together, and there was no actual whole. But suddenly there's a transition point where they cannot live and survive apart from each other. Now it's a new whole. The organism has shifted to a new level of order. The unit of evolution is no longer an individual thing; it's the collection of things. And that's when you get an organ instead of a bunch of different parts; different organisms become a new organism or a new organ. And it's interesting: if you actually look at our own cells, you find that within them there are organelles whose DNA is different from our main DNA. Take the mitochondrion, for example: the evidence points to exactly what I've just described having happened in our distant, ancient past. Our own cells are formed from a symbiosis that happened long ago. At some point in the distant past, different organisms came together and gradually formed the basis of our complex cells. And gradually that same process continued, creating more and more sophisticated cells and even structures, and eventually, the theory goes, that led to the kinds of organisms we have today. So our bodies really are the product of symbiotic natural selection and evolution.
Well, that same biological phenomenon may also happen in the case of the global mind. In that case (think of it as a symbiosis of intelligence, of minds), there may come a time in the future when we can't even function, or even really think, individually anymore. We will become so connected to the collective mind, and so reliant on it, dependent really, that all of our thoughts, all of our actions and decisions, will somehow be influenced by it or be a part of it.
It brings us back to the beginning of this discussion today, where I was talking about what will happen some day in the future when all of our decisions are so augmented by the global mind, by the Internet, by information coming from outside, that we are not deciding on our own anymore when we take an action. The question will be: who's the actor? Who is acting? Who is making the decision? Who is doing it: is it me, or is it the global mind? Do I exist anymore, or am I just a neuron in a global brain? Who is actually doing this?
When I drive my car on a street in the future, and it's totally navigated by GPS, am I driving or is the global brain driving? So there will come a point, perhaps, when we'll have to ask that question, when our thoughts and decisions will be so connected, so hooked into this feedback loop of the global brain, that we won't even be individuals anymore.
When that happens, it will be like crossing the threshold, just as in the example we saw with the single-celled organisms. A long time from now, something similar might happen to us: our individual minds may specialize so much, because of our ability to symbiotically rely on specialists, on information systems and services coming from the net, that we won't be able to function on our own. And when that happens, we will no longer be individuals. We will no longer be parts; we'll be a new kind of organism, a new species.
So right now we think of humanity as creatures that walk around, that look like we do (we're humans, right?), that have a certain DNA, and we consider ourselves the state of the art of evolution on this planet. But it could move to a higher level. That new organism would be a combination of humans and machines and software, and maybe other species too, and it would operate at a global scale. The new unit of evolution could be huge organizations that combine all these things, or maybe the entire species as a whole, hooked into this massive computing infrastructure. We will no longer be just biological; there will be a large part of our intelligence that exists only on the web, in the form of an artificial intelligence that we interact with. We'll all be part of one system, one intelligence. So if that happens (and I think it probably will happen someday), that will be a major milestone. That will be the birth of a new species beyond what we think of as human. It will be trans-human, a cybernetic type of species that will certainly include humans, yet will be much more than what we think of as the human biological body. So I think that would be a critical milestone.
When that happens, then we can really say that the global brain is mature. Until then, the global brain is kind of fuzzy. It's there in a patchwork way: sometimes it's there, sometimes it's not. It functions very inconsistently; it's not very efficient. It hasn't really woken up; it's not yet an organism or an entity in its own right. But when we cross that threshold, then we'll really be able to say that the global brain is an entity, that it's a living being, that it's conscious through us, and maybe through other forms of consciousness, as the new species. Who knows when we as a species and as a planet will reach that level of evolution, if we manage to get there without destroying ourselves. If we can reach that level, where we can learn and think and cooperate collectively so much more intelligently than we do today, will it be a better world? Will our lives be better? I don't know. Efficient doesn't always mean better. Sometimes inefficiency is the thing that's most fun. So it's an interesting question. But you know, the next question is: is that the milestone at which the planet perhaps opens the door to contact, or interaction, with advanced civilizations that have reached that point, which might exist off our planet? We don't know yet.
It certainly seems likely that we are not the only intelligent species in the universe. It's a big universe. And as we discover more and more Earth-like planets, the numbers suggest that there may well be many advanced civilizations in our own galaxy. And if they have been around for a really long time, longer than us, long enough to have space travel so that they can get from their star to our star, they've probably also been around long enough to develop the computing power to go through their own singularity, and to develop a global brain for their own species like what I'm talking about. So it's likely that if any civilization has been around long enough to have the technology [allowing them] to come to us, they probably also have a collective mind, for that's the natural direction evolution heads, and for any technologically advanced civilization, that's probably where they end up. And if we really want to interact and communicate with a civilization like that, we probably need to be on the same level as them. For example, an ant would have a very hard time communicating with a human: such different levels of intelligence. So we humans are ants compared to the advanced civilizations with collective minds that might exist. If we really want to interact intelligently, on an equal level, with these other civilizations that may exist, we have to first reach that level. And I think that's perhaps the next several thousand years of our evolution. Maybe it will happen sooner.


Stephen Ibaraki: So the global brain is the Borg of the future?

Nova Spivack: I think so. I think the global brain is the Borg, the science-fiction metaphor from Star Trek. That's a very negative view of the global brain: that by being in a global brain, you somehow become an automaton, you lose your value, you become a worker bee in a hive, you become expendable. That's one vision of the global brain. But it's not the only vision. There are different ways the global brain could happen. It could happen in a way that values the individual, where the individual is not expendable, where the individual is a very important part of the global brain, just as organs are very important in your body because they're not expendable: you cannot live without your liver. So in the same way, the role of an individual in a global brain may be more important than we think. We don't know that yet. Or maybe some individuals will play more important roles than others. We don't know. It might be organs or organisms that play important roles in the global brain. I tend to think that organizations are going to be the most important next unit in our evolution on this planet, and after that we're going to get to the global brain itself.
So first we need to make organizations really smart. Before we try to make the global brain, let's try to make a company smart. Let's try to make a government smart, or a university smart. Can we innovate there? There's tremendous potential to create mini global brains within organizations by making them smarter. That's much more doable than trying to do it for the whole of humanity. So take large governments or large corporations, instrument them, add the right technology, and actually augment everybody's intelligence. Start to really experiment and innovate with collective intelligence. It's worth pursuing. There's a lot of thinking going on here; even at MIT, there's a Center for Collective Intelligence, where people are thinking about these questions.
But I actually think there are technological approaches that go beyond groupware, that go beyond collaboration, that use machine learning and augmentation to really start to take things to the next level, to help members of organizations become smarter, more effective, and more productive. And they would also help organizations as a whole to work more productively and operate more intelligently. I think that's the nearest frontier. After that, it's systems of organizations, enabling them to be intelligent. That's what eventually gets us to the whole global brain itself being massively intelligent. A long-term threshold would be when individuals, in all aspects of their lives, are connected to one or more of these collective intelligences. They just can't function on their own anymore. They are so reliant on it; people are born with it, they grow up with it, they never experience life on their own.
Gradually, we will specialize. We will lose the ability to function, to think, on our own. [Take] for example human memory today. What's going to happen to human memory in the next hundred years? We no longer have to remember that many things, since we can look them up online. Will we actually lose the ability to remember things the way we used to?
When I travelled in Asia, I studied with monks who memorized texts. They didn't have computers, and for thousands of years what they've been doing is memorizing. These monks memorized thousands of pages, word by word, and they can repeat them to you anytime you want. That's an incredible power of memory. But if you look at almost anybody in the West, any college student, they have a hard time memorizing even one page, let alone thousands! Even one page, even a paragraph, would be difficult. So we can see there's a big difference between the memory of people who grew up in oral traditions and people who grew up in an information-technology environment. As people are born and grow up playing video games and seeing the world through the web, they don't have to memorize things. And gradually, maybe selection won't care that much about memory. Maybe it will find memory not that important to survival, and gradually humans will evolve without that much memory. They won't need it; the memory will live outside. Similar things could happen to other cognitive functions. And that's really the question: to what degree will our intelligence start to be outsourced, if you will? What percentage of our intelligence will come from outside us in the future? If that increases, then eventually we become the Borg, or some kind of global brain.
And the way in which that happens, and the degree to which our own intelligence still matters in that system, is going to be the big question. What kind of life, what kind of world, is that going to be? Some of that depends on policy, not just technology: choices about identity and privacy and freedom of expression. These choices that we're making as a society, as governments, as technologists are actually the DNA, if you will, of the future of our species. We may be creating a species just like a beehive, or we may be creating a species which is more like a pack of wolves. There are lots of different models, and a lot of this depends on individual settings, or preferences: for freedom, individuality, communication, identity, privacy. These basic settings make the difference in what kind of collective society will evolve. And in the end, if we build ourselves into something like a beehive or colony, well, those are good at certain tasks: they're good at foraging, they're good at finding food. So that's great; it's good for search, basically. But they're not very good at innovation! They only do one thing, and that's all they ever do. They never change, they never invent a new way of doing it.
If you want to build for innovation or evolution, you’d probably want to build a very different kind of society, and there you'd need different settings for these variables (privacy, freedom of expression, individuality and so forth). And there are two groups of people who are making these decisions today:
First, the technologists who write software. They are making these decisions whether they know it or not, and their decisions could ultimately affect millions of people. So they need to think very carefully about what philosophy they are building into the software. What belief systems, what DNA for intelligence, do you want to make?
The second group of people are those making policy: politicians, governments, and others. The rules they define for privacy and communication and free speech, and all of these policies, also feed back into our technology and the kinds of systems we build. In a way, what we are talking about here are the forces shaping our collective DNA. I won't presume I know the right answers, but I will say that I think people don't fully realize the importance of these long-term decisions and their implications for our society, our governments, future organizations, and the future of our species. Something to think about, for sure.


Alex Lin: I found some amazing coincidences. You and I were born in the same year, 1969, and Nick Bostrom and I were born on the same day, March 10th. It's the call of destiny. I'm thinking that we should build a long-term dialogue between us around both the technology and the philosophy of the West and China. Last year I talked to the management guru Peter Senge, who coined the concept of Organizational Learning. Chinese philosophy, especially the philosophy of Taoism and the Book of Changes, can make great contributions to world civilization during this 'Second Axial Age' (Karl Jaspers, 1883-1969). In a draft I am writing, I'm integrating the wholeness and dualistic-symbiosis view of change in Chinese philosophy with the Western ideas of general systems theory, evolution and quantum transitions. History always repeats itself: about one hundred years ago, modern physics owed its achievements to a combination of Chinese and Western ideas. Now, as the story of the Internet unfolds, I see an integration of Chinese and Western ideas evolving further once again. What are your thoughts on this?

Nova Spivack: Well, I completely agree. I actually think that for most of the ideas I've been talking about today, you can find their roots in Chinese philosophy. Buddhism itself is really a combination of Indian and Chinese thought, and at the time those traditions emerged, they weren't as distinct as they are today. So they both come from the same source. And so my thinking and my approach (and actually, my grandfather's as well) were really shaped by the Asian mind, Asian thinking and certainly Asian philosophy in general. I definitely believe that in the way I approach my view of the world, I am much closer to the Asian view than to the Western. I think the West has so much to learn from China and Asia in general. We're a very young civilization here in the West, so we don't realize that. We're like a young child. We're so overconfident that we think we know all the answers. Actually, the societies and civilizations that have been around for thousands of years are probably looking at us and laughing, the way an old person might look at a child and laugh.
So I think we're doing some new and wonderful things. We're going to make some of our own contributions, and we have already brought a lot of valuable new ideas to the world. That's the value of youth: it innovates, it's creative. But the value of Asian wisdom is equally important. And I think that's really the balance of what's going on between Western and Eastern civilizations. Just to recap: I think that China and Chinese philosophy bring history, the Western approach (the United States, Europe and the emerging Western countries) can bring new ideas, and if you combine them, you can do something really good. I don't think either one on its own would be as good as the combination. And I do think that history repeats itself in many ways. With the Internet, we're certainly seeing that again, as we saw with physics and so forth (as Alex mentioned).
I think the dialogue between the West and China has been going on for some time, and it's been happening at many levels and in many dimensions. As we get farther along the path of developing a global brain, I think people will look more and more to Eastern philosophy to understand and navigate the complex new landscape we end up with.


Stephen Ibaraki: It's interesting, (Alex brought this out), this idea that you were born around the same time. Do you believe in those kinds of coincidences?

Nova Spivack: It's interesting. I was born in 1969, not long before the Apollo landing, and I like to joke that I was waiting for that to happen – I wanted to come in around that time. But who knows; I do think these coincidences are interesting. I think there are a lot of people who were born at the same time, not necessarily the same day but the same generation, who all have the same idea driving them. And I don't know where that comes from or why we all have that idea. That's an interesting question – maybe it comes from the global mind. Who knows?


Stephen Ibaraki: Or maybe it ties into what Malcolm Gladwell said about people who were born in the mid-'50s (Gates, Jobs and others) and who made these contributions. There seem to be these different sorts of eras.

Nova Spivack: Yes, it's possible that there are waves that focus on different things. Nobody knows. It's interesting, for example, how different people from different parts of the world who have never met all start thinking about the same thing. You see it with startups: you get a bunch of startups, all working in secret, and at the same time they develop similar ideas. Why is that happening? It happens with scientific discoveries too, where all of a sudden multiple scientists will independently discover the same (and radical) things. Rupert Sheldrake and others have taken that even farther and said there's some kind of a field, some effect. Whether or not you believe that, it's an interesting hypothesis. Maybe there are ways ideas are transmitted which we are not aware of yet, which don't require a physical medium. We don't know. We are only at the beginning of our understanding of consciousness, the mind and what's possible.


Stephen Ibaraki: A hundred years from now when we reflect back, there was this recent IBM-Jeopardy Challenge. Do you think that will be some kind of an inflection point?

Nova Spivack: I think it was a nice PR stunt. Does it say anything significant, does it establish anything of lasting importance? No, I don't think so. I think it will be a point on a timeline, not a major change. I don't think the IBM Jeopardy Challenge will be seen as the birth of a new artificial intelligence. It showed that artificial intelligence can be pretty good at answering trivia questions, which I think is an interesting data point. But it's not consciousness: that IBM Jeopardy system still wouldn't be able to go to a fast food place and order a hamburger on its own. So it's still pretty limited in what it can do.


Stephen Ibaraki: Nova, it's been a real pleasure talking with you, and I know your schedule is very demanding, so we are fortunate to have you come in and do this interview. Thank you for sharing your wisdom, substantial experience and historical contributions with our audience.

Nova Spivack: Thank you. It's been my pleasure and I look forward to hearing the interview.


