On October 26, the Abelard School Computer Science class was privileged to attend the Third Annual Machine Learning and the Market for Intelligence Conference hosted by the Creative Destruction Lab at U of T's Rotman School of Management, one of the world's premier conferences on this topic. Speakers included leading scientists (such as Carnegie Mellon professor and head of AI at Apple, Russ Salakhutdinov, and MIT professor Max Tegmark), leading investors from Silicon Valley (such as Steve Jurvetson, who wrote one of the first cheques supporting SpaceX and Tesla, and Albert Wenger of the famous New York venture capital firm Union Square Ventures), and pioneering entrepreneurs (such as Elizabeth Caley, whose AI startup to enhance scientific discovery was recently acquired by the Chan Zuckerberg Initiative). Also making a guest appearance was Prime Minister Justin Trudeau.
Many thanks to Joshua Gans and Jennifer O’Hare for including our students in this conference.
Here are some student reports on this extraordinary event.
Aurora Bolianatz:
While at the Machine Learning and the Market for Intelligence Conference on October 26, I got to listen firsthand to many presentations on the future of artificial intelligence and on what it means to be human at a time when we are becoming obsolete. The speakers delved into subjects such as embodiment, brain science, and artificial general intelligence; however, the presentations I personally found the most interesting were the ones about the future of humans in an AI world, and how we can approach the subject wisely.
The first presenter to really spark my interest in this subject was Max Tegmark, a professor at MIT. He spoke about the incredible leaps and bounds AI development has made in recent years, but also reminded us how quickly things can go horribly wrong without proper management. He made four main points on how to steer the approaching future of AI. The first was to make sure lethal autonomous weapons are banned: biologists and chemists have both worked hard to ensure that specialists in their fields use their knowledge for cures and other beneficial purposes rather than for weapons, and both fields have seen bioweapons and chemical weapons banned. Tegmark encouraged us to follow suit and make sure AI is used for good, not evil. This leads into his second point: using the wealth AI creates to benefit all people, not only a select few. His third point was to invest in safety research. He drew our attention to the Apollo 11 launch, noting that NASA worked through every possible thing that could go wrong and put safeguards in place to ensure that no catastrophes happened. Much like a space launch, AI is a field where it is much better to get everything right the first time than to learn from mistakes. Finally, Tegmark asked us what sort of future we wish to see, as that is the truly defining question for the future of AI.
Another presenter who spoke on the importance of humans in AI was Joshua Gans, a professor at the Rotman School of Management at U of T, who discussed the value of data. Though many people today consider data to be immensely valuable, Gans argued instead that data becomes meaningless once used, and that the true asset in prediction machines and the like is human judgement. The knowledge and skills we have accumulated as humans are valuable to the AI process: we design the machines, give them their purposes, and can clean up after them in case all hell breaks loose. The human mind is an astonishing thing, and though AIs have been able to recreate themselves, they simply cannot replace us in every way.
In contrast to the previous two professors, Ben Goertzel emphasized in his presentation the importance of leaving humans out of AI development. While I see his point that AI learns better without humans hindering it, I also think his suggestion of a sort of cloud space in which AIs could communicate and share information without human guidance is pretty terrifying. The amount of power and knowledge they could potentially accumulate would be enough to wipe out humans altogether, and while I understand that we haven't been the best for this planet (or each other), as a human, I really like the idea of Not Dying Due to the Robot Overlords.
Ariel Gans:
The CDL conference was a lot of fun. The talks were all very interesting, and I especially enjoyed the surprise visit from the prime minister. The topics I found most interesting were those discussed in the presentations by Richard Sutton of the University of Alberta and Suzanne Gildert of Kindred AI. The basic idea I liked from those two talks was that in order to prevent a Terminator Judgement Day situation, we need to keep our general intelligences' goals and ideals in line with ours. This led to the idea that we should aim for cooperation with AIs, not control. If we create a truly sentient AI and we've given it our morals and creativity and all that, we've made something very human-like. Gildert talked about how these are likely the integrated AIs that we will end up creating, and about how there's a huge market for robots that are essentially flawless humans. And if we create flawless humans, maybe we want to integrate them into society like humans. A strange feeling I got toward the end of Gildert's talk was that if we've created artificial beings with our morals which identify with us and our history, maybe we don't really need to worry about being wiped out as a race, since a future running parallel to technology, or perhaps even replaced by it, is likely going to be our legacy. Another theme of the conference was that a large upside to reaching conclusions like these is that we can prepare for these possible futures, legally and socially, before it's too late. On the legal side, having a solid framework for technologies like self-driving cars and robotic doctors before they are everywhere greatly reduces the risk of something going horribly wrong. On a lighter note, I learnt that there are machine learning methods far better than deep learning at playing video games, which I found quite interesting. The winning method was the one that was able to strategize in a very human-like fashion, coming up with the "hit the blocks at the top" strategy for brick breaker all by itself.
Dominik Bednarczyk:
The conference hosted by the Creative Destruction Lab was incredibly interesting. I enjoyed how they brought in experts from different fields with different points of view to discuss a variety of ethical and logical issues surrounding the integration of AI into our society. Seeing these experts debate the topics, and learning which ideas conflicted and what they all unanimously agreed on, was extremely interesting. My personal favourites were the AI embodiment session and the Vicarious session, because those speakers had the most energy and I liked how personal one of them got. I enjoyed the embodiment lecture in particular because it showed what we still need to work on and what we have already achieved, which let me think about which fields I might want to invest time and effort in, both to have the least difficulty finding work and to help speed up progress in the field I particularly enjoy. The Vicarious speaker gave an amazing presentation about the future and compared AI to different levels of consciousness throughout Earth's history. There were also many interesting startups present, some of which I already use and others of which have huge potential for our society. But the most interesting part, of course, was the special guest: Prime Minister Justin Trudeau. Not only was I excited to hear that our prime minister supports bringing AI into Canadian society and plans to make Canada the AI hub of the world, but being able to ask him questions about the future of AI in Canada was especially amazing. Mr. Trudeau discussed the moral implications of self-driving cars, which I am personally very passionate about, and the revelation that he studied and enjoyed poetry at university was particularly funny and interesting. All in all, the speakers were amazing to listen to, gave great insight into humanity's future with technology, and presented their ideas in a way that was easy to understand and grasp.
Konstantin Uvarov:
My favourite thing I encountered at the Creative Destruction Lab Artificial Intelligence Conference was a start-up company that helps people immigrate to Canada from other countries. I think this use of artificial intelligence will be very helpful because it will allow people to spend less time filling in documents. All people will have to do is type in their information and scan their documents, and the program will do the rest of the work for them. It will be beneficial for both sides: people will spend less time on their immigration paperwork, and the government will not need to hire as many people to do the work the AI program can do. In conclusion, I think that in the near future this start-up will become a necessity for the Government of Canada, because it will not make mistakes in applying the law and will be 100% accurate in its decisions.