
Latest Results

  • Yona & Orr
  • Jan 6, 2017
  • 1 min read

After long months of work constructing our interface and bringing the system up to speed, over the past few weeks we have been able to fine-tune our parameters and raise our agent's scoring percentage to a competitive level.

This was done in part by adding several upgrades to our DQN code, extensions that have been shown to improve results across a range of domains.
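The post does not name the specific upgrades, so as a hedged illustration only, here is a minimal NumPy sketch of one widely used DQN extension, the Double DQN target: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of the vanilla Q-learning target. The function name and array shapes below are our own assumptions, not the authors' actual code.

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_next_online, q_next_target, gamma=0.99):
    """Compute Double DQN targets for a batch of transitions.

    rewards, dones:               shape (batch,)
    q_next_online, q_next_target: shape (batch, n_actions)
    """
    # Action selection uses the online network...
    best_actions = np.argmax(q_next_online, axis=1)
    # ...but the chosen action is evaluated by the target network.
    q_eval = q_next_target[np.arange(len(rewards)), best_actions]
    # Terminal transitions (dones == 1) contribute no bootstrapped value.
    return rewards + gamma * (1.0 - dones) * q_eval

# Tiny example: two transitions, two actions; the second ends the episode.
rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])
q_next_online = np.array([[0.5, 2.0], [1.0, 0.0]])
q_next_target = np.array([[0.3, 1.0], [0.8, 0.2]])
targets = double_dqn_targets(rewards, dones, q_next_online, q_next_target, gamma=0.9)
# First target: 1.0 + 0.9 * 1.0 = 1.9; second is terminal, so 0.0.
```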

Below are some samples of our results (more to follow soon!):

Empty goal scenario:

Scoring percentages for this scenario have been especially high (even beating some benchmark tests of other systems).

Video of the agent scoring on an empty goal, towards the end of training:

Scoring percentage plot:

1 agent vs. goalie scenario:

Here is a video of the run, recorded near the beginning of the training session (higher quality coming soon):

1 agent vs. goalie and defender scenario:

Scoring percentage plot:

Video coming soon!

Stay tuned for more results and deeper explanations on all scenarios.

© 2017 by Yona Cohen & Orr Krupnik.  Proudly created with Wix.com
