I listened to a podcast interview with Kai-Fu Lee a few months ago and immediately ordered his book, AI Superpowers: China, Silicon Valley, and the New World Order (2018, affiliate link). Since then, I’ve ordered several copies for friends.

If you want to get smart on AI – and its business and social implications – this is a good place to start. I would recommend watching / listening to a few interviews with Kai-Fu Lee, then going ahead and ordering the book. It’s in plain (easy for your cousin to understand) English. He’s got strong opinions about the potentially disruptive (read: scary) effects that AI will have on the economy, geopolitical power, and the meaning of work. Here’s what I took away from the first chapter, pages 1-21.

The quest for AI started in the 1950s

It’s easy for a layman like me to mistakenly think AI is a new thing. It seems like we hear a lot about AI in the popular press nowadays, but computer scientists like Minsky, McCarthy, and Simon have been pinging away at this since the 1950s. They’ve been trying to give machines some type of human intelligence for 60+ years.

It’s front-page news now because – in many respects – we finally have the software, computing power, and data sets to make it a useful reality.

This is not 1997 IBM Deep Blue

For Gen-Xers like me, we remember when the computer beat Garry Kasparov at chess. While it felt monumental, I’ve since learned that this was “brute force” processing. It required chess champions to teach it guiding heuristics, and the challenge was limited to an 8×8 chess board. Basically, it was an engineering marvel – but far from something we would call AI today.

AI forks into rules-based and neural networks

The way Kai-Fu Lee explains it, there were two primary AI camps / approaches in the 1980s. Note: I’m sure there are more nuanced ways to bucket the AI taxonomy, but let’s remember this book is for Main Street people like me.

  • Rules-based approach (aka “symbolic systems” or “expert systems”) teaches computers through a series of rules: if X, then Y. It works well for simple problems, but requires human experts to help code the rules.
  • Neural-networks approach mimics the brain’s architecture: provide lots and lots of examples, and let the machine decipher the patterns. The machine teaches itself. (See the toy sketch after this list.)
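
To make the two camps concrete, here’s a toy sketch of my own (not from the book). The loan-approval task, the thresholds, and all the numbers are invented for illustration; the point is only the contrast between hand-coded rules and weights learned from examples.

```python
# Toy illustration (not from the book): rules-based vs. neural-network camps.
# The loan-approval task and all numbers here are made up for the example.

import random

# --- Camp 1: rules-based ("expert system") ---------------------------------
# A human expert hand-codes the logic: if X, then Y.
def expert_system_approve(income, debt_ratio):
    if income > 50_000 and debt_ratio < 0.4:
        return 1   # approve
    return 0       # deny

# --- Camp 2: neural network (here, a single neuron / perceptron) -----------
# No hand-coded rules: show it many labeled examples and let it adjust its
# own weights until it finds an approximation of the pattern on its own.
def train_perceptron(examples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (income, debt_ratio), label in examples:
            x = [income / 100_000, debt_ratio]        # crude feature scaling
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = label - pred                        # -1, 0, or +1
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

# Synthetic "historical decisions" to learn from.
random.seed(0)
examples = []
for _ in range(500):
    income = random.uniform(20_000, 120_000)
    debt_ratio = random.uniform(0.1, 0.8)
    examples.append(((income, debt_ratio), expert_system_approve(income, debt_ratio)))

w, b = train_perceptron(examples)
print("expert rule says:", expert_system_approve(75_000, 0.3))
print("learned weights :", w, "bias:", b)
```

Same decision, two very different ways of getting there: one encoded by experts up front, one teased out of the examples by the machine.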

Last 10 years = AI Renaissance

There are three key factors behind artificial intelligence (the neural-network approach). Call them the trinity if you like: software, data, and computing power. Together they drive the super-fast pattern recognition:

Data. More of it than ever. From the moment we wake up, we check our phones, drive our cars, respond to emails, track our sleep on a Fitbit, pay bills, or Venmo a friend. This is all data. Sure, let’s call it big data.

Software. Apparently, Geoffrey Hinton is the name to know. He’s a computer scientist called the “Godfather of deep learning,” and he won the Turing Award in 2018. As far as I can tell, in the mid-2000s he developed a new way to train deep neural networks. As Kai-Fu Lee comments, it turbocharged neural networks, essentially putting them on steroids.

Computing power. Yes, this has been a miracle for 40+ years (read: Moore’s law). This is how Deep Blue (IBM) got good at chess. As Lee mentions, your iPhone has many millions of times the computing power of the NASA computers that sent people to the Moon. This is what allows deep learning to crush visual object recognition and natural language processing. M.a.s.s.i.v.e. computing power.

Deep learning = supercharged neural networks = narrow AI

Okay, this can get confusing, so let’s recap a little.

  • Deep learning is a subset of machine learning. Got it.
  • Deep learning is like neural networks on steroids (thanks to Geoffrey Hinton and DeepMind).
  • Deep learning is considered “narrow AI” because you are essentially using a LOT of data in a specific (read: narrow) field to optimize a decision on a specific objective.

Okay, that’s a lot of business talk to say, “Deep learning helps you optimize one decision.” This is NOT the science-fiction, general-intelligence AI where the computer thinks laterally, with free will. This is NOT HAL 9000 from 2001: A Space Odyssey. No, this is narrow AI: it uses a LOT of data in a specific field to make a decision on a specific objective. Got it.
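
If it helps to see what “one narrow decision, lots of data” looks like in practice, here’s a minimal sketch of my own (not from the book), with scikit-learn as an assumed dependency: a small multi-layer network that learns exactly one thing, recognizing handwritten digits, from thousands of labeled examples.

```python
# A minimal sketch of "narrow AI": a multi-layer neural network that does one
# thing only (classify handwritten digits) given lots of labeled examples.
# Requires scikit-learn (pip install scikit-learn).

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 1,797 labeled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Deep" here just means more than one hidden layer, with all the weights
# learned from the data while optimizing a single, narrow objective.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("digit accuracy:", model.score(X_test, y_test))
# Ask it about chess, driving, or conversation and it has nothing to say.
# That is what "narrow" means.
```

The model can get quite good at that one objective, and it is completely useless at anything else.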

A few Amazon examples of narrow AI

Yes, it’s narrow, but it’s also massively useful. From an Economist article, “Amazon’s Empire Rests on Its Low-Key Approach to AI” (April 11, 2019), here are four ways that Amazon uses deep learning. This is how the machine teases out patterns from massive data to make a good decision:

  • Recommending products that you might want to buy
  • Optimizing the movement of robots in its warehouses to minimize human wait times
  • Tracking shopper movements in its Amazon Go (no human check-out person) stores
  • Forecasting demand for computational power at its AWS cloud services

There’s no data like more data

Deep learning depends on massive amounts of clearly labeled data. Often, the labels are applied by humans (yes, a little ironic to have people cleaning up data for the machine to use). The largest labeling company, MBH, is based in China and has 300,000 people who label data full time. Wha?

Yes, it’s big business, and that makes sense. If your data is junk, then it’s junk in, junk out. A data-labeling firm called CloudFactory argues that 80% of the work in AI projects is collecting, cleansing, and labeling data. Here’s what good data labeling looks like: Ultimate Guide to Data Labeling for Machine Learning.

There is a lot of data to label. After all, apparently 1 hour of video requires 8 hours of work for full annotation. According to the Financial Times, this industry will grow to $1 billion by 2023, creating new jobs for folks in India, China, Kenya, the Philippines, etc.
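
To make that concrete, here’s a rough sketch of my own (the record format and numbers are made up, apart from the 1-hour-of-video-to-8-hours-of-annotation ratio cited above): what one human-labeled video frame might look like, plus the back-of-the-envelope math on the labeling effort.

```python
# Illustrative only: a hypothetical annotation record, plus rough math on the
# 8 hours of labeling per 1 hour of video mentioned above.

# One human-made annotation: which objects are where, in which frame.
labeled_frame = {
    "video_id": "street_cam_0042",   # hypothetical file name
    "frame": 1375,
    "boxes": [
        {"label": "pedestrian", "x": 212, "y": 98,  "w": 40, "h": 110},
        {"label": "bicycle",    "x": 405, "y": 160, "w": 85, "h": 60},
    ],
}

# Back-of-the-envelope: how much human time does a video dataset need?
hours_of_video = 1_000
annotation_hours_per_video_hour = 8
total_hours = hours_of_video * annotation_hours_per_video_hour
print(f"{hours_of_video} hours of video -> {total_hours:,} hours of labeling work")
# 1,000 hours of footage needs about 8,000 hours of labeling, roughly four
# person-years. Hence the 300,000-person labeling workforces.
```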

Two big AI transitions (implementation & data)

Lee argues that we are at two critical AI transitions that will tilt the AI advantage toward China and away from the current incumbent, the US. The first is a shift in emphasis from discovery to implementation. The second is a shift in emphasis toward data.

1) Increasingly, the AI opportunity will be implementation

If we make the analogy between AI and electricity, let’s remember that the real money and impact of electricity came not from the initial discovery, but from the implementation and the business models created around it. So the rapid ascendance of AI use cases is good, but the businesses built around them will be great. Think about the electrification of buildings and appliances, the move away from dangerous kerosene and whale-oil lamps, and all the real-world impact that gave us. Yes, that is what’s next for AI.

Two quotes from Kai-Fu Lee:

“Much of the difficult but abstract work of AI research has been done, and now it’s time for entrepreneurs to roll up their sleeves and get down to the dirty work of turning algorithms into sustainable businesses.”

“Many of the new milestones [in AI] are merely the application of the past decade’s breakthroughs to new problems.”

China has made it a huge national priority to be #1 globally in AI by 2030. Two factors make that vision a likely reality: 1) industrial policy: entire cities are experimenting (at scale) with AI-friendly infrastructure; and 2) relaxed social norms around privacy: if you apply for a mobile phone, you sign up with facial recognition.

For both good and bad, you will not see that level of policy coordination and privacy acquiescence in the US. Call it “market forces” or call it “complacency,” but the US is just motoring along, hoping that the big FAANG giants have a plan.

2) The scarce resource will be data

Lee argues that within the AI trinity of factors, data is currently the most important. Sure, we might get another Hinton-magnitude advancement, but we’re at a threshold where the algorithms are good enough, and learning so quickly, that the real comparative advantage lies in harnessing data: larger, more diverse, better-annotated data sets. The metaphorical fire is already burning; we need more wood.

“Given much more data, an algorithm designed by a handful of mid-level AI engineers usually outperforms one designed by a world-class deep-learning researcher.”

This is also a trump card for China. China has already surpassed the US in the volume of data produced. WeChat (from what I understand) is a single-source aggregator of much of that data. Can you imagine if all your iPhone activity (like combining FB, Google, Twitter, Instagram, Venmo, Yelp, gaming) went through one app? Cray.

So what?

The purpose of this blog post (and ostensibly the book) is not to ring xenophobic or jingoistic alarm bells. First, it’s important to understand what AI is and the implications of its wider implementation (reasonable). Second, it’s also key to disabuse people of the belief that the AI revolution will be slow or trivial (oh, okay). Third, as businesspeople, we should think broadly about how AI is changing things and who the winners will be (sure). Finally, just buy the book here (affiliate link).

AI-savvy readers, what else do you want to say?

Did I oversimplify parts? Absolutely. My apologies, but let’s keep this 80/20 and move the conversation forward. What other elements of AI should we focus on?

I plan to blog a lot more on this topic as I’m learning...
