The text below focuses on artificial intelligence and is my personal interpretation of the keynote sessions I attended, so it is not necessarily completely factually accurate.
Keynote by Sir John Sawers
Sir John Sawers, former chief of the Secret Intelligence Service (MI6), started off by providing the audience with a macro perspective on the challenges we will face in the years to come. He identified three key topics:
- The dominant forces are back. The United States, China, and to a lesser extent Russia are battling for world dominance. The focus of the last 50 years on collaboration and removing (trade) barriers has been replaced by protectionism, military power, and a battle for dominance in the technology space. Countries are decoupling, each pursuing its own interests. This even applies to the internet, which is already being Balkanized by countries like Russia and China.
- Populist parties are on the rise. Often driven by growing inequality, more people vote for ‘strong leaders’ and populist parties, thereby polarizing the political landscape. So within countries, too, we are increasingly decoupling.
- The importance of technology. With the decoupling of countries, access to military technologies and other strategic technology areas has become a matter of state policy. The United States, Europe, China, and other countries now closely guard the industries they consider vital to their national interests. At the same time, technology is increasingly used by state and non-state actors to infiltrate and disrupt vital infrastructure elements of other countries.
The most important technology for the years to come is Artificial Intelligence (AI). Its impact on both military capabilities and GDP (a 10x increase in world GDP, to $13,500 trillion) is huge, easily offsetting the billions of dollars invested by companies and countries.
Keynote by Stuart Russell
The second keynote, by Stuart Russell, godfather of modern artificial intelligence, elaborated on the history and limitations of AI. He started off with some history: Aristotle speculated around 340 BC that workers would become irrelevant if something like AI were to evolve, and Turing made a similar prediction in 1951.
Stuart Russell then cautioned that we are currently surfing on a wave of optimism; we are in an AI gold rush. Billions are being invested in AI, but the technology is still in its infancy. Despite the billions already invested in creating autonomous cars, we are still billions of dollars away from achieving the desired objective. While Russell expects car manufacturers to continue investing in the technology until it reaches maturity, there may be other areas where AI proves too costly (for the foreseeable future). Hence, a considerable share of the billions invested in AI will not yield any return, but the potential upside is too big to be ignored.
There are also other signs of this lack of maturity, such as voter manipulation via social networks and racial bias. Both are undesirable, and it is primarily up to the engineers to prevent them, as they create the software and model the objective. Regardless of the objective, Russell points out that machines are beneficial only to the extent that their actions can be expected to achieve our human objectives. They should never be allowed to pursue their own objectives.
He also mentioned that the amount of data will become less relevant as the software underpinning AI becomes smarter. We will need less data to reach the same or even better decisions. Hence, data is not the new oil: in time, its relevance will decline. For me, this was one of the eye-openers of the session.
After covering the difference between machine learning (ML, which can only learn) and probabilistic programming (PP, which can learn and predict, and is therefore more useful as it can give answers and anticipate), Russell moved on to the second key takeaway for me: we cannot predict when AI will reach or surpass human intelligence. There is no ‘Moore’s Law’ for AI. The critical conceptual breakthrough could come tomorrow or in twenty years. To make his point, he used the breakthrough required to eventually build the atomic bomb: in September 1933, Ernest Rutherford declared that it would be impossible to harness the power of the atom, while roughly 16 hours later Leo Szilard conceived the required breakthrough.
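The learn-versus-predict distinction can be made concrete with a toy sketch of my own (this coin-flip example is my illustration, not from the talk). A plain estimator only fits a single best parameter from the data, while a minimal probabilistic model, here a Beta-Bernoulli pair, updates a prior belief and can then predict the next observation with that uncertainty folded in:

```python
# Toy illustration (my own, not Russell's example): "learn only" versus
# "learn and predict" using a Beta-Bernoulli coin-flip model.

def ml_estimate(flips):
    """Learn only: fit a single best-guess parameter from the data."""
    return sum(flips) / len(flips)

def posterior_params(flips, alpha=1.0, beta=1.0):
    """Learn probabilistically: update a Beta(alpha, beta) prior
    with the observed heads (1) and tails (0)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

def predict_next_heads(flips, alpha=1.0, beta=1.0):
    """Predict: posterior predictive probability that the next flip
    is heads, which also reflects the prior, not just the data."""
    a, b = posterior_params(flips, alpha, beta)
    return a / (a + b)

flips = [1, 1, 0, 1]              # three heads, one tail
print(ml_estimate(flips))         # point estimate from the data alone
print(predict_next_heads(flips))  # prediction tempered by the prior
```

The point of the sketch: the probabilistic version gives an answer (a posterior belief) and anticipates the next outcome, whereas the plain estimate stops at fitting.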
Also interesting is the observation that we humans are not yet prepared for AI that is smarter than we are. Our ability to think is what gives us meaning in life. What will we do when robots have become smarter than us? It would allow for ‘everything as a service’, but at what social cost? Will we all degrade into a kind of vegetative state, spending our lives consuming content and food?
Keynote by Garry Kasparov
The second day started with a keynote by Garry Kasparov. He shared his insights on AI using the advances in automated chess engines as an example. My key takeaway from this session is twofold: a) these engines win because they make fewer mistakes than humans, not because they are smarter; and b) by combining human and machine, we can create 1+1=3.
Let me explain the last one in a bit more depth. Machines that learn structured games like chess and Go start from scratch. They are not fed the existing body of knowledge from humans playing the game, but play millions of games to find the moves most likely to result in a win. This also creates blind spots, because the machine will ignore moves that, from a statistical perspective, are less likely to result in winning. However, humans know from their body of knowledge that some of these statistical outliers actually do win the game. Even worse: due to the millions of games played and the resulting deeply ingrained framework the machine uses to make decisions, a human can exploit such a winning move thousands of times before the machine adapts its framework.
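The slow-adaptation point can be sketched with a toy value-update loop (my own illustration with made-up numbers, not Kasparov's example). After extensive self-play, a move's learned win-rate estimate typically moves only by a tiny learning rate per game, so a human who keeps winning with a dismissed "outlier" move needs thousands of repetitions before its estimate overtakes the engine's preferred mainline:

```python
# Toy sketch (assumed numbers, not from the talk): how long an exploit
# keeps working when learned values update only slowly.
value = {"mainline": 0.55, "outlier": 0.10}  # win-rate estimates after self-play
lr = 1e-4  # tiny learning rate once the framework is deeply ingrained

games = 0
while value["outlier"] < value["mainline"]:
    # The human plays the outlier move and wins (observed reward = 1.0);
    # the engine nudges its estimate toward that outcome.
    value["outlier"] += lr * (1.0 - value["outlier"])
    games += 1

print(games)  # thousands of games before the engine re-ranks the move
```

With these assumed values the loop runs for several thousand iterations, which is the mechanism behind "the human can use the winning move thousands of times": the estimate converges exponentially slowly at a small learning rate.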
We are heading towards a less stable and more isolated world in which Artificial Intelligence is going to be a huge game changer. AI is therefore a technology that cannot be ignored, but it remains unclear when it will surpass human intelligence. It also remains to be seen what our days will look like when we are no longer the smartest kid on the block.
The second part will be less abstract and focus on my lessons learned regarding trends in cyber defense.