Despite the hype, especially around self-driving cars, AI is writing code, designing chip floor plans at Google, and telling us how much to trust it.
Given how much of the AI hype is just that—hype—it’s easy to forget that a wide range of companies are having real success with AI. No, I’m not talking about Tesla’s continued errant marketing of AI-infused “full self-driving.” As analyst Benedict Evans writes, “[V]ersion nine of ‘Full Self-Driving’ is shipping soon (in beta) and yet will not in fact be full self-driving, or anything close to it.” Rather, I’m talking about the kinds of real-world examples listed by Mike Loukides, some of which involve not-so-full self-driving.
To make AI work, you’re going to need money and good data, among other things, a recent survey suggests. Assuming those are in place, let’s look at a few areas where AI is making headway in improving our lives, not merely our marketing.
Write my code for me
The most visible recent experiment in enhancing human productivity with machine smarts is GitHub’s Copilot. Similar to how your smartphone (or things like Gmail) can suggest words or phrases as you type, Copilot assists developers by suggesting lines of code or whole functions as they work. Trained on billions of lines of code hosted on GitHub, Copilot promises to improve developers’ productivity by letting them write less, but better, code.
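To make that concrete, here’s the kind of exchange Copilot enables. The snippet below is my own illustration, not actual Copilot output: the developer types a comment and a function signature, and the assistant proposes a plausible body.

```python
# The developer writes the comment and signature...
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    # ...and a Copilot-style assistant might suggest a body like this:
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```

The suggestion here happens to be correct, but as the concerns below make clear, plausible and correct don’t always coincide.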
It is way too soon to know if Copilot will work. I don’t mean whether or not it can do what it purports to do; many developers rushed to try it out and have lauded its potential. And yet, there are concerns, as Simon Bisson points out:
You shouldn’t expect the code Copilot produces to be correct. For one thing, it’s still early days for this type of application, with little training beyond the initial data set. As more and more people use Copilot, and it draws on how they use its suggestions for reinforcement learning, its suggestions should improve. However, you’re still going to need to make decisions about the snippets you use and how you use them. You also need to be careful with the code that Copilot generates for security reasons.
There are also concerns about copyright and open source, among other things. Some think this sounds great in theory but will fade as developers get back to the practice of writing code. The key is whether developers find Copilot’s suggestions useful in real programming scenarios, not merely the pretty-darn-cool fact that it can make them at all. The best AI augments human creativity rather than supplants it.
The real autonomous driving
The reality of self-driving cars today, of course, is that they aren’t self-driving, but can assist drivers by taking on more of the load. (If only Elon Musk marketed this way.) The promise of autonomous vehicles has been hampered somewhat by their reliance on GPS, which can fail. But as described in the journal Science Robotics, scientists at Caltech have come up with “a seasonally invariant deep transform for visual terrain-relative navigation.” In human speak, this means that autonomous systems (like cars) can take cues from the terrain around them to pinpoint their location, whether that terrain is covered with snow, fallen leaves, or the lush grass of spring.
Current methods require mapping/terrain data that match almost exactly what the vehicle “sees,” and snow, fallen leaves, and other seasonal changes can break that match. The Caltech scientists took a different approach, using self-supervised learning: “While most computer-vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing, this one instead lets the algorithm teach itself. The AI looks for patterns in images by teasing out details and features that would likely be missed by humans.” With this deep learning approach, the scientists have built a highly accurate way for machines to see, and react to, the world around them.
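To give a flavor of how a model can teach itself seasonal invariance, here’s a minimal sketch in the general spirit of the technique. The architecture and training details are my simplifications, not the Caltech team’s actual method: train an encoder so that images of the same terrain in different seasons land close together in feature space, while different places get pushed apart.

```python
# Hypothetical sketch of learning a seasonally invariant image transform.
# Not the Caltech authors' code; just the general contrastive idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TerrainEncoder(nn.Module):
    """Tiny CNN mapping a terrain image patch to an embedding vector."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(summer, winter, temperature=0.1):
    """Pull embeddings of the same place across seasons together;
    push embeddings of different places apart (InfoNCE-style)."""
    logits = summer @ winter.t() / temperature
    targets = torch.arange(len(summer))  # image i in summer matches image i in winter
    return F.cross_entropy(logits, targets)

# One toy training step; random tensors stand in for real image pairs.
encoder = TerrainEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
summer_imgs = torch.randn(8, 3, 64, 64)  # eight places, photographed in summer
winter_imgs = torch.randn(8, 3, 64, 64)  # the same eight places, in winter
loss = contrastive_loss(encoder(summer_imgs), encoder(winter_imgs))
loss.backward()
optimizer.step()
```

Note that no human labels appear anywhere: the pairing of images across seasons is the only supervision, which is what makes the approach self-supervised.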
Not surprisingly, many of the things around a car are other cars. The Caltech approach doesn’t help here, but new research from a scientist at Florida Atlantic University’s College of Engineering and Computer Science aims to learn from the emotions of human drivers and adjust the vehicle’s driving accordingly. No one is using this newly patented technique in production yet, but it points toward a more holistic view of safety and trust in autonomous driving.
A question of trust
OK, OK. This is all still somewhat speculative, but what Google achieved with chip design is not. As described in Nature, Google engineers took a novel approach to floor planning, the task of designing the physical layout of a computer chip. Engineers have been trying for decades to automate this without success. But by using machine learning, Google’s chip designers took a months-long, laborious process and got results in under six hours. How? The engineers approached floor planning “as a reinforcement learning problem, and develop[ed] an edge-based graph convolutional neural network architecture capable of learning rich and transferable representations of the chip.”
To get to this point, the engineers pretrained an agent using a set of 10,000 chip floor plans. Then, using reinforcement learning, as the engineers detailed, the agent “learns” from past success to prescribe the next blocks to be set down: “At any given step of floor planning, the trained agent assesses the ‘state’ of the chip being developed, including the partial floor plan that it has constructed so far, and then applies its learnt strategy to identify the best ‘action’—that is, where to place the next macro block.”
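As a toy illustration of that state/action loop, here’s a sketch that places blocks one at a time, scoring each candidate placement with a wirelength proxy. To be clear, the greedy rule below is my stand-in for exposition only; Google’s agent is a trained graph neural network, and the block and net names are invented.

```python
# Toy floor planning as sequential decision-making: at each step, look at
# the partial floor plan (the "state") and choose a cell for the next
# block (the "action") that keeps estimated wirelength low.
import itertools

GRID = 8  # an 8x8 canvas standing in for the chip die

def wirelength(placements, nets):
    """Half-perimeter wirelength proxy over placed blocks: smaller is better."""
    total = 0
    for net in nets:
        xs = [placements[b][0] for b in net if b in placements]
        ys = [placements[b][1] for b in net if b in placements]
        if xs:
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def place_blocks(blocks, nets):
    """Greedy stand-in for the trained agent's learnt strategy."""
    placements = {}
    free = set(itertools.product(range(GRID), range(GRID)))
    for block in blocks:  # one macro block per step, as in the paper
        best = min(free, key=lambda cell: wirelength({**placements, block: cell}, nets))
        placements[block] = best
        free.remove(best)
    return placements

blocks = ["cpu", "cache", "dram_ctrl", "io"]          # invented names
nets = [("cpu", "cache"), ("cpu", "dram_ctrl"), ("dram_ctrl", "io")]
print(place_blocks(blocks, nets))
```

The real system replaces the greedy rule with a policy learned from those 10,000 prior floor plans, which is why it can generalize to chips it has never seen.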
It’s an impressive feat, but even more impressive is that it’s actually being used in production at Google now, which means Google trusts these AI-generated floor plans enough to put them into real hardware.
This brings me to the final project: IBM’s Uncertainty Quantification 360 (UQ360). One of the challenges with AI is our (un)willingness to trust its results. It’s one thing to be data-driven, but if we don’t fully trust that data or what the machine will do with it, it becomes impossible to let AI take the wheel. UQ360 is an “open source toolkit with a Python package to provide data science practitioners and developers access to state-of-the-art algorithms to streamline the process of estimating, evaluating, improving, and communicating uncertainty of machine learning models as common practices for AI transparency.”
In other words, it uses AI to estimate how much you can trust what the AI wants to do.
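For a sense of the underlying idea, here’s a generic sketch of one common uncertainty-quantification technique, a bootstrap ensemble. This is not UQ360’s API; it just shows how disagreement among models trained on resampled data yields an error bar to report alongside a prediction.

```python
# Generic uncertainty quantification via a bootstrap ensemble.
# Illustrates the kind of idea UQ360 packages; this is not UQ360's API.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y = 2x + noise.
X = rng.uniform(0, 10, size=200)
y = 2 * X + rng.normal(0, 1, size=200)

def fit_line(x, t):
    """Least-squares slope and intercept."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef

# Train an ensemble, each member on a bootstrap resample of the data.
models = []
for _ in range(100):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    models.append(fit_line(X[idx], y[idx]))

# The spread of the ensemble's predictions is the uncertainty estimate.
x_new = 5.0
preds = np.array([slope * x_new + intercept for slope, intercept in models])
print(f"prediction: {preds.mean():.2f} +/- {preds.std():.2f}")
```

A model that reports “10.0 +/- 0.1” earns a different kind of trust than one that reports “10.0 +/- 5.0,” and that’s exactly the conversation UQ360 is designed to enable.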
This is a great advance because it should breed more trust in the AI that increasingly guides the world around us. We’ve spent years being told the robots are taking over, yet our day-to-day experience is with advertising that still can’t match our interests to buying opportunities. AI is becoming real, and it doesn’t need hype to prove its utility.