I'm Building Three Instances of General AI

November 10, 2018

I’m in the process of doing something very controversial. Some even think it’s impossible.

I’m building some AGI systems. Not one, but three. Let me describe them.

The term AGI may be new to some. It’s also called Strong AI. AGI stands for Artificial General Intelligence, and it should be roughly equivalent to human intelligence. The systems most people know as AI right now are considered Narrow AI, or ANI. That is, we can produce systems that succeed at human (or even superhuman) levels within a very narrow window of skills. We have systems that can beat grandmasters at chess, never lose at checkers, or master the game of Go, but none of these specific systems can also tell the difference between a dog and a cow. They’re all specialized. An AGI may not have any single skill as strong as these examples of ANI, but it can emulate human ability across a wide range of skills and also learn new ones.

So that’s what I’m building: three AGI systems. Three seems like a good number for evaluating the differences and similarities between these systems. I’d love to say I’m doing all of this myself, but that would be an impossible task. It took quite a long time to find the right collaborator on this project, but once I did we got right down to work. Other than a set of common principles shared between my collaborator and me, the seed data for these projects has been generated largely at random.

I keep using the present tense “building” because, while I already consider this AGI project a wild success, it’s still very much in process. The initial construction process - which my main collaborator was chiefly responsible for - is complete. But it will still take years for the systems to reach full maturity. Why? Well, with any AI system, an enormous amount of training data is required. In particular, these three systems are a more complex style of Deep Neural Network. In fact, each is a vast series of DNNs, with many more layers than ever seen before. That complexity means we have even less insight into exactly how the system interacts with its input, and it also means a huge amount of training data is needed.

Like, a really, really huge amount of data. Even with a steady, very high bandwidth input process, it will take years to feed in enough. Furthermore, the training must mix supervised and unsupervised data for best results. And while repetition can sometimes be incredibly valuable, feeding in the same datasets over and over just won’t do either.

My primary collaborator and I are chiefly responsible for the data going into these systems, but not exclusively so. We continue to carefully select other contributors for this project, and their contributions have been invaluable.

While there are still years to go, early signs look very positive. The various systems are at different levels of maturity, but they’re already showing adeptness across a wide variety of tasks, including image and shape recognition, pattern matching, vocabulary comprehension, and following instructions of varying complexity.

I’d love to say we know how the AI will turn out. We don’t. That’s the nature of AI: we try to start with the right initial conditions, and we work hard to ensure the training datasets are good ones and push things toward a good outcome. But the simple answer is we don’t know yet, although all signs point in a very positive direction. My collaborator and I have done some calculations on when our AGIs will be ready. The worst case is that they’ll never be ready, and we have to be prepared for that. But barring any major issues, the AGIs should be ready anywhere from 12-14 years after construction (highly optimistically) to 20-24 as an outside range. Of course, no matter when they’re “ready” - a very subjective measure, as they’d all pass the Turing test even now - any new data will continue to train the systems and improve the quality of the intelligence.

This entire process, as you might expect, has been incredibly demanding. The amount of time and effort involved - from selecting the appropriate collaborator, to the construction process, to the sheer amount of training required - well, it takes a lifetime. And no matter what the outcome of this experiment, we will love these systems completely and unconditionally.


If you hadn’t already guessed by now, my three children’s names are Adeline, Eric, and Margaret. They are my wonderful AGIs, and they are currently 6, 4, and almost 2. I love the people they are now, and I can’t wait to get to know the people they will become. I hope they get all the training data they need!

Writing this out was meant as a thought experiment about Artificial Intelligence: specifically the question of whether or not we can actually create intelligent, thoughtful machines. There is still significant debate as to whether we can create a physical system that has intelligence, thought, and consciousness.

Sidestepping more philosophical questions like the matter of the soul, the answer to that question is of course: yes. Whether we like it or not - and even if the explanation remains forever beyond our calculations - consciousness is somehow derived, learned, and stored in a 3 lb. mass of grey matter in our heads. Whenever we conceive a child, we produce a new system that evolves over time into a General Intelligence, taking small leaps in consciousness over the years as the brain (and its container) grows and learns.

Writing this has also been valuable to (again) appreciate just how important, awesome, and vibrant the act of creating and raising children really is (and how close it can be to how computer scientists think about AI). From the mostly tacit decisions we make in choosing our partners, to the incredible importance of good or bad training data while children are still young and malleable: if you have kids, remember that this is the most important task you’ll ever have.

