A writer and a software engineer from Google's People + AI Research team explore the human choices that shape machine learning systems by building competing tic-tac-toe agents.
What have we learned about machine learning and the human decisions that shape it? And is machine learning perhaps changing our minds about how the world outside of machine learning — also known as the world — works? For more information about the show, check out pair.withgoogle.com/thehardway/. You can reach out to the hosts on Twitter: @dweinberg…
Yannick and David’s systems play against each other in 500 games. Who’s going to win? And what can we learn about how the ML may be working by thinking about the results? See the agents play each other in Tic-Tac-Two! For more information about the show, check out pair.withgoogle.com/thehardway/. You can reach out to the hosts on Twitter: @dweinber…
David’s variant of tic-tac-toe that we’re calling tic-tac-two is only slightly different but turns out to be far more complex. This requires rethinking what the ML system will need in order to learn how to play, and how to represent that data. For more information about the show, check out pair.withgoogle.com/thehardway/. You can reach out to the h…
David and Yannick’s tic-tac-toe ML agents face off against each other in tic-tac-toe! See the agents play each other! For more information about the show, check out pair.withgoogle.com/thehardway/. You can reach out to the hosts on Twitter: @dweinberger and @tafsiri. By People + AI Research
Give that model a treat!: Reinforcement learning explained
26:04
Switching gears, we focus on how Yannick’s been training his model using reinforcement learning. He explains the differences from David’s supervised learning approach. We find out how his system performs against a player that makes random tic-tac-toe moves. Resources: Deep Learning with JavaScript (book); Playing Atari with Deep Reinforcement Learning …
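The reward idea discussed in this episode can be pictured with a tiny sketch. This is a hypothetical tabular version for illustration only (Yannick’s actual agent uses a neural network rather than a table): the agent stores an estimated value for each (board state, move) pair and nudges it toward the reward it eventually receives.

```javascript
// Hypothetical tabular sketch of the reward signal ("treat") from this
// episode; not Yannick's actual deep-RL setup.
const qTable = new Map(); // key: "<state>|<move>" -> estimated value

function getQ(state, move) {
  return qTable.get(state + '|' + move) ?? 0; // unseen pairs start at 0
}

function updateQ(state, move, reward, alpha = 0.1) {
  // Nudge the stored estimate a small step (alpha) toward the reward.
  const old = getQ(state, move);
  qTable.set(state + '|' + move, old + alpha * (reward - old));
}

// Give the model a treat: reward +1 for a move that led to a win.
updateQ('X.O......', 4, 1);
console.log(getQ('X.O......', 4)); // 0.1 after one small update
```

Winning moves get rewarded, losing moves penalized, and over many games the agent’s estimates come to favor good moves.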
Beating random: What it means to have trained a model
17:14
David did it! He trained a machine learning model to play tic-tac-toe! (Well, with lots of help from Yannick.) How did the whole training experience go? How do you tell how training went? How did his model do against a player that makes random tic-tac-toe moves? For more information about the show, check out pair.withgoogle.com/thehardway/. You can…
Once we have the data we need (thousands of sample games), how do we turn it into something the ML can train itself on? That means understanding how training works, and what a model is. Resources: See a definition of one-hot encoding. For more information about the show, check out pair.withgoogle.com/thehardway. You can reach out to the hosts on Twitt…
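The one-hot encoding mentioned in the resources can be shown in a few lines. A minimal sketch, assuming one possible representation (not necessarily the episode’s exact scheme): each of the 9 cells is empty, X, or O, and each cell becomes a length-3 one-hot vector, so a whole board becomes 27 numbers the model can train on.

```javascript
// One-hot encoding a tic-tac-toe board (illustrative sketch).
// Each cell is 0 (empty), 1 (X), or 2 (O) and becomes a one-hot triple.
function oneHotBoard(board) {
  const encoded = [];
  for (const cell of board) {
    const v = [0, 0, 0];
    v[cell] = 1; // exactly one slot is "hot"
    encoded.push(...v);
  }
  return encoded; // 9 cells * 3 classes = 27 numbers
}

// Example: X in the center, everything else empty.
console.log(oneHotBoard([0, 0, 0, 0, 1, 0, 0, 0, 0]).length); // 27
```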
What does a tic-tac-toe board look like to machine learning?
23:26
How should David represent the data needed to train his machine learning system? What does a tic-tac-toe board “look” like to ML? Should he train it on games or on individual boards? How does this decision affect how and how well the machine will learn to play? Plus, an intro to reinforcement learning, the approach Yannick will be taking. For more …
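One way to picture the "games or individual boards" question: a single recorded game can be flattened into several (board, next-move) training examples. A hypothetical sketch under assumed conventions (moves recorded as cell indices 0 through 8, X moving first), not necessarily the representation David ends up choosing:

```javascript
// Flatten one recorded game into (board, next-move) training examples.
function gameToExamples(moves) {
  // moves: array of cell indices 0..8, alternating X then O.
  const examples = [];
  const board = Array(9).fill(0); // 0 = empty, 1 = X, 2 = O
  moves.forEach((cell, i) => {
    // Record the board as seen BEFORE this move, paired with the move.
    examples.push({ board: board.slice(), move: cell });
    board[cell] = i % 2 === 0 ? 1 : 2; // X moves on even turns
  });
  return examples;
}

// A 3-move opening yields 3 training examples.
console.log(gameToExamples([4, 0, 8]).length); // 3
```

Training on boards this way gives the model many more examples per game, but it loses the game-level context of whether a position eventually led to a win.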
Welcome to the podcast! We’re Yannick and David, a software engineer and a non-technical writer. Over the next 9 episodes we’re going to use two different approaches to build machine learning systems that play two versions of tic-tac-toe. Building a machine learning app requires humans making a lot of decisions. We start by agreeing that David will…
Introducing the podcast where a writer and a software engineer explore the human choices that shape machine learning systems by building competing tic-tac-toe agents. Brought to you by Google's People + AI Research team. More at: pair.withgoogle.com/thehardway