An interesting article in a June 2018 issue of The Economist, ‘The History of Bigheadedness’, discussed studies into why human brains are three times bigger than those of our closest living relatives. The studies found the NOTCH2NL genes to be uniquely active in humans. These genes encouraged stem cells to proliferate rather than simply turn into neurons, so when those stem cells did eventually turn into neurons, the result was more neurons than normal.

The study of NOTCH2NLs in humans may help explain how we got our bigger brains, but not why. Such a mutation would need to have been favoured by natural selection; in other words, why were big brains useful? Whilst humans now dominate the Earth, this hasn’t always been the case, and smaller-brained animals have survived perfectly well and prospered at far lower caloric cost. Some of the potential reasons why bigheadedness evolved, as listed in The Economist, include tool-making; impressing potential mates, much like plumage on birds; and the Machiavellian intelligence hypothesis, whereby bigger brains have allowed humans to manipulate others in ways other animals cannot.

Thinking about the evolution of the human brain seems highly relevant given the prevalence of AI and machine learning, where machines are starting to work off the blueprints of our own brains. DeepMind’s AlphaGo famously beat one of the world’s top players at the ancient Chinese board game Go. Whilst machines beating humans is not a new phenomenon, the type of game and the way in which the machine won are what set this instance apart as revolutionary. Go is not a game in which a machine can simply work out every single move at superhuman speed; the game requires something like human intuition, and the machine was able to mimic human thinking using reinforcement learning and neural networks.

Progress, it seems, is being made to ‘formalise intelligence’ – to understand exactly what intelligence is and how it works. The question around AI and machine learning is whether humankind can keep up with the technology it creates.

After all, certainly in the context of healthcare, AI and machine learning seem to be a step in the right direction – but would the same sentiment hold when they are adopted by other industries? It usually isn’t the technologies and advancements themselves that raise concerns, but the people behind the technology: how they choose to adopt it and how they safeguard those exposed to it.

Devneet Toor