<h3>Rewards
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://steemitimages.com/DQmeao2PDnTB7bmSuWxyaBUmwBSRAHf2TP47Vt2SpRuEgpS/report4_rewards.png" alt="report4_rewards.png" srcset="https://images.hive.blog/768x0/https://steemitimages.com/DQmeao2PDnTB7bmSuWxyaBUmwBSRAHf2TP47Vt2SpRuEgpS/report4_rewards.png 1x, https://images.hive.blog/1536x0/https://steemitimages.com/DQmeao2PDnTB7bmSuWxyaBUmwBSRAHf2TP47Vt2SpRuEgpS/report4_rewards.png 2x" /><br />
<center><strong>Figure 1: Gained rewards of the agent. The orange dots show the reward of each single episode, the blue line shows the average over 500 episodes. The agent ran for 75000 episodes.
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://steemitimages.com/DQmd2V7pu2PnyzdM5ZVxk7AVKxNnRzUuVpZzE9KrpHnCDKx/report4_rewards_histogram.png" alt="report4_rewards_histogram.png" srcset="https://images.hive.blog/768x0/https://steemitimages.com/DQmd2V7pu2PnyzdM5ZVxk7AVKxNnRzUuVpZzE9KrpHnCDKx/report4_rewards_histogram.png 1x, https://images.hive.blog/1536x0/https://steemitimages.com/DQmd2V7pu2PnyzdM5ZVxk7AVKxNnRzUuVpZzE9KrpHnCDKx/report4_rewards_histogram.png 2x" /><br />
<center><strong>Figure 2: Histogram of rewards per episode, the histogram has 10 bins, with a size of 44606.4 each.
<div class="table-responsive"><table>
<thead>
<tr><th>Episode<th>Reward<th>Minerals<th>Gas
<tbody>
<tr><td>2104<td>446064.0<td>2345<td>468
<tr><td>19972<td>437483.9<td>1775<td>556
<tr><td>33434<td>388482.7<td>2375<td>432
<tr><td>60775<td>368278.0<td>1410<td>644
<tr><td>22227<td>356823.1<td>1315<td>452
<tr><td>32624<td>355436.2<td>1545<td>500
<tr><td>21229<td>354093.9<td>2590<td>164
<tr><td>25682<td>352979.6<td>2975<td>0
<tr><td>19094<td>340669.9<td>2930<td>0
<tr><td>32586<td>339692.5<td>3095<td>0
<p dir="auto"><center><strong>Table 1: The top 10 episodes in terms of rewards.
<p dir="auto">Figure 1 shows all the gained rewards for every episode as well as the smoothed average of the rewards. Both plots show a very steep increase in the gained after around 15000 episodes. This lead to a plateau at a reward of around 150000 from around 17000 episodes to around 30000 episodes, where rewards were rather high in average with a few very high outliers. From 30000 episodes to 35000 episodes the average rewards were constantly decreasing, with two spikes, due to very high or rather high rewards. Around 36000 episodes there is another spike in the average rewards, that is around 100000. From that on the rewards reach another plateau at around 40000 till the end. Looking at the plot of all rewards shows that there is a high variance in gained rewards.
<p dir="auto">Figure 2 shows that the vast majority of episodes is in the first bin, meaning that they had a reward from 0 to 44606.4, this also shows that the agent was stuck in a plateau where the reward was around 40000 for most of its runtime, more than 50000 episodes fall in this bin and only had a reward between 178425.6 and 223032 than between 133819.2 and 178425.6, which is not apparent from the plot of all rewards.
<p dir="auto">Table 1 shows the top 10 episodes in terms of rewards. The top 7 episodes are episodes which are a combination of relatively high amounts of collected gas and high amounts of collected minerals, while in the last three episodes only minerals had been collected. The reason why the episode with a higher mineral count is because for the log files I use the last reward and not a sum of all rewards in an episode. By doing so, it is also visible if an agent performed actions that led to a decrease of the reward. This happened e.g. in episode 32586, where the agent started collecting minerals quickly, but then performed actions that didn't contribute to the increase of the reward and thus leading to a lower reward even though the total amount of collected minerals is higher than for e.g. episode 19094. Note that the final reward is only used for logging, for training the agent all cumulated discounted rewards that the agent receives are used.
<h3>Minerals
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://steemitimages.com/DQmSUgv7yR2qvm7X18ywjNp8CzvqpThWevTQNB7Wz4EX5vA/report4_minerals.png" alt="report4_minerals.png" srcset="https://images.hive.blog/768x0/https://steemitimages.com/DQmSUgv7yR2qvm7X18ywjNp8CzvqpThWevTQNB7Wz4EX5vA/report4_minerals.png 1x, https://images.hive.blog/1536x0/https://steemitimages.com/DQmSUgv7yR2qvm7X18ywjNp8CzvqpThWevTQNB7Wz4EX5vA/report4_minerals.png 2x" /><br />
<center><strong>Figure 3: Collected minerals of the agent. The orange dots show the collected minerals of each single episode, the blue line shows the average over 500 episodes. The agent ran for 75000 episodes.
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://steemitimages.com/DQmXKyt8kJtFuSc2WfrTK7MguSYDWs6YgKqLDmnincyPos9/report4_minerals_histogram.png" alt="report4_minerals_histogram.png" srcset="https://images.hive.blog/768x0/https://steemitimages.com/DQmXKyt8kJtFuSc2WfrTK7MguSYDWs6YgKqLDmnincyPos9/report4_minerals_histogram.png 1x, https://images.hive.blog/1536x0/https://steemitimages.com/DQmXKyt8kJtFuSc2WfrTK7MguSYDWs6YgKqLDmnincyPos9/report4_minerals_histogram.png 2x" /><br />
<center><strong>Figure 4: Histogram of collected minerals per episode, the histogram has 10 bins with a size of 309.5 each.
<div class="table-responsive"><table>
<thead>
<tr><th>Episode<th>Reward<th>Minerals<th>Gas
<tbody>
<tr><td>32586<td>339692.5<td>3095<td>0
<tr><td>37906<td>327749.7<td>3045<td>0
<tr><td>30191<td>277834.5<td>3020<td>0
<tr><td>29342<td>337969.1<td>2995<td>0
<tr><td>17437<td>261376.8<td>2980<td>0
<tr><td>25682<td>352979.6<td>2975<td>0
<tr><td>30978<td>258900.1<td>2965<td>0
<tr><td>24885<td>256661.0<td>2960<td>0
<tr><td>30162<td>334053.1<td>2960<td>0
<tr><td>26006<td>316695.7<td>2955<td>0
<p dir="auto"><center><strong>Table 2: The top 10 episodes in terms of collected minerals.
<p dir="auto">Figure 3 looks very similar to figure 1, indicating that the collected minerals are mostly responsible for the gained rewards. Also around 15000 episodes there is a steep rise in the amount of collected minerals, to slightly less than 2500 collected minerals per episode. As for the gained rewards, the agent stays at this plateau until around 30000 episodes, until it steadily decreases to around 700 collected minerals per episode until the end, there is also a spike around 36000 episodes, where it goes up to around 1000 collected minerals per episode. Around 59000 episodes there is also a small spike up to 800 collected minerals per episode, which does not have much influence on the gained rewards. Note that here the variance is also very high, but around the one high plateau, there are almost no very low outliers, meaning that during this plateau the agent practically collected minerals during every episode.
<p dir="auto">Figure 4 has also some similarities to figure 2, but there is one obvious difference: the bin with the most episodes in is from 309.5 to 619, so there are more episodes where the agent collected at least 309.5 minerals than where it collected no minerals. Also the plateau of where the agent collects between 2476 and 2785.5 minerals per episode for around 14000 is easily visible in the histogram. This influences the slight spike in the histogram of the rewards, although since the influence of collected minerals on the total reward is damped, it is not so visible in the rewards histogram.
<p dir="auto">Table 2 shows the top 10 episodes in terms of collected minerals. As mentioned above also here a higher amount of collected gas does not necessarily mean a higher amount of gained reward. It is notable that even though most of the top 10 episodes happened when the agent reached the highest plateau, the three episodes with the most collected minerals happened when the agent left this plateau and the are responsible for the spikes in the figure 3. Episode 17437 might be responsible for the steep increase in gained rewards and collected minerals that is visible in figure 1 and figure 3, together with episodes 19972 and 19094 as shown in table 1 because they gave very high rewards early in the agent's runtime.
<h3>Gas
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://steemitimages.com/DQmWWQ112emJLjpUTHd3NxRmb8FyLfKCidL3V9r7pKjeA96/report4_gas.png" alt="report4_gas.png" srcset="https://images.hive.blog/768x0/https://steemitimages.com/DQmWWQ112emJLjpUTHd3NxRmb8FyLfKCidL3V9r7pKjeA96/report4_gas.png 1x, https://images.hive.blog/1536x0/https://steemitimages.com/DQmWWQ112emJLjpUTHd3NxRmb8FyLfKCidL3V9r7pKjeA96/report4_gas.png 2x" /><br />
<center><strong>Figure 5: Collected gas of the agent. The orange dots show the collected gas of each single episode, the blue line shows the average over 500 episodes. The agent ran for 75000 episodes.
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://steemitimages.com/DQmc1G5nBfSLCEFx2kxaZPtcpWo5gKoKSnygyN6u4imzvZo/report4_gas_histogram.png" alt="report4_gas_histogram.png" srcset="https://images.hive.blog/768x0/https://steemitimages.com/DQmc1G5nBfSLCEFx2kxaZPtcpWo5gKoKSnygyN6u4imzvZo/report4_gas_histogram.png 1x, https://images.hive.blog/1536x0/https://steemitimages.com/DQmc1G5nBfSLCEFx2kxaZPtcpWo5gKoKSnygyN6u4imzvZo/report4_gas_histogram.png 2x" /><br />
<center><strong>Figure 6: Histogram of collected gas per episode, the histogram has 10 bins with a size of 70.8 each.
<div class="table-responsive"><table>
<thead>
<tr><th>Episode<th>Reward<th>Minerals<th>Gas
<tbody>
<tr><td>47860<td>152955.0<td>760<td>708
<tr><td>60775<td>368278.0<td>1410<td>644
<tr><td>18985<td>334472.5<td>1595<td>636
<tr><td>58515<td>244735.3<td>1100<td>604
<tr><td>61472<td>187292.0<td>665<td>588
<tr><td>67944<td>213293.6<td>915<td>584
<tr><td>6113<td>106560.4<td>480<td>580
<tr><td>62611<td>160610.8<td>355<td>576
<tr><td>19972<td>437483.9<td>1775<td>556
<tr><td>4370<td>127439.7<td>640<td>556
<p dir="auto"><center><strong>Table 3: The top 10 episodes in terms of collected gas.
<p dir="auto">Figure 5 shows how much gas an agent collected per episode, was well as a smoothed average. The variance is even higher than for rewards or collected minerals. The smoothed average is very low and usually around 0, but there are some very high outliers. The smoothed average is oscillating between 0 and 20 gas collected per episode. This indicates that the agent has not learned how to collect gas and that the higher amounts of collected gas are the product of the random exploration. Looking at figure 6, the histogram of collected gas, strengthens this assumption: over 70000 episodes collected between 0 and 70.8 gas.
<p dir="auto">Table 3 shows the top 10 episodes in terms of collected gas. This table again highlights the rather random nature of the agent's gas collection. Even though collecting gas is higher valued than collecting minerals, the rewards of the top episodes for collected gas can not compete with the top episodes for collected minerals in terms of gained rewards, with the exception of episodes that also have a high amount of collected minerals, especially again episode 19972.
<h3>Analysis of Actions
<div class="table-responsive"><table>
<thead>
<tr><th>Action<th>Occurrences
<tbody>
<tr><td><code>Harvest_Gather_screen<td>53.57%
<tr><td><code>no_op<td>18.073%
<tr><td><code>select_point<td>5.2032%
<tr><td><code>move_camera<td>5.1394%
<tr><td><code>select_idle_worker<td>3.6681%
<tr><td><code>Move_minimap<td>3.3873%
<tr><td><code>Move_screen<td>3.3578%
<tr><td><code>Build_Refinery_screen<td>2.2372%
<tr><td><code>Build_SupplyDepot_screen<td>1.8124%
<tr><td><code>Build_CommandCenter_screen<td>0.80255%
<tr><td><code>Rally_Workers_screen<td>0.75193%
<tr><td><code>Rally_Workers_minimap<td>0.74176%
<tr><td><code>Harvest_Return_quick<td>0.55428%
<tr><td><code>Morph_SupplyDepot_Lower_quick<td>0.36859%
<tr><td><code>Morph_SupplyDepot_Raise_quick<td>0.33171%
<p dir="auto"><center><strong>Table 4: Total frequency of Actions.
<div class="table-responsive"><table>
<thead>
<tr><th>Action<th>Occurrences
<tbody>
<tr><td><code>move_camera<td>5.1196%
<tr><td><code>no_op<td>5.1023%
<tr><td><code>select_point<td>5.0814%
<tr><td><code>select_idle_worker<td>3.6396%
<tr><td><code>Harvest_Gather_screen<td>3.4297%
<tr><td><code>Move_minimap<td>3.3873%
<tr><td><code>Move_screen<td>3.3578%
<tr><td><code>Build_Refinery_screen<td>2.2372%
<tr><td><code>Build_SupplyDepot_screen<td>1.8124%
<tr><td><code>Build_CommandCenter_screen<td>0.80255%
<tr><td><code>Rally_Workers_screen<td>0.75193%
<tr><td><code>Rally_Workers_minimap<td>0.74176%
<tr><td><code>Harvest_Return_quick<td>0.55428%
<tr><td><code>Morph_SupplyDepot_Lower_quick<td>0.36859%
<tr><td><code>Morph_SupplyDepot_Raise_quick<td>0.33171%
<p dir="auto"><center><strong>Table 5: Total frequency of Actions.
<div class="table-responsive"><table>
<thead>
<tr><th>Action<th>Occurrences
<tbody>
<tr><td><code>Harvest_Gather_screen<td>50.141%
<tr><td><code>no_op<td>12.971%
<tr><td><code>select_point<td>0.12185%
<tr><td><code>select_idle_worker<td>0.02849%
<tr><td><code>move_camera<td>0.019841%
<p dir="auto"><center><strong>Table 6: Frequency of non random Actions.
<p dir="auto">In order to keep the file size of the logfiles reasonable only the actions for the top 10 episodes in terms of collected minerals and collected gas and the last 10 episodes where kept. For 16 agent instances this makes a maximum of 480 episodes. In my case 468 episodes were analysed with a total of 393120 actions.
<p dir="auto">Table 4 shows the frequency of all actions, table 5 shows how many of them were random and table 6 shows how many of them were not random. Especially table 6 shows that the agent learned the relation between performing the action <code>Harvester_Gather_screen and getting a high reward and therefore tried to perform this action over 50% of the time. It also learned that sometimes doing nothing is also beneficial for achieving this goal, so <code>no_op is the second most frequent performed action. The actions for selecting: <code>select_point and <code>select_idle_worker were also learned from the agent, but only performed in very few cases. Taking a look at the action logs shows that the agent needed to perform a random select action before it was able to send worker units harvesting. This means that the agent did not learn the rule that it has to first select worker units, before it can send them to harvest. It is notable that the action <code>select_idle_worker gets less often chosen by the agent than the action <code>select_point, even though this action is easier to perform, since it requires no position. The agent was not able to learn the relation between building a refinery and collecting gas, since the action for building a refinery (<code>Build_refinery_screen) only gets performed as random action.
<h4>Running the Agent on a different Map
<p dir="auto">The trained agent was also run on different maps than the one it was trained, but without much success. Since the training map did not require any exploring of the map1, the agent was not really able to perform much exploration of the map for finding new resource patches, so it was just performing the same actions as it was doing on the training map, so it lead to the same results.
<h3>Interpretation of the Results
<p dir="auto">As written above, the agent was able to learn the relation between giving the command <code>Harvester_gather_screen and an increase in reward, but the agent was not really able to learn the rule that it has to select a worker unit before it can give the harvest command. One reason for that could be that episodes where the <code>select_idle_worker was given early on only returned relatively small final rewards. For example the episodes 63074, 34261 and 64866 the <code>select_idle_worker command was given quite early, but the total amount of collected minerals was rather low (except for episode 64866), also the final reward wasn't as high as one would expect. This is due to the agent ordering the worker units to harvest early on, but then gives other commands, which keeps them from harvesting. Another special case is episode 37906, it was the second best episode in terms of collected minerals and also the reward was quite high. In this episode the agent didn't give much <code>useless commands, that kept the worker units from collecting minerals. The reason why I was only looking at the <code>select_idle_worker command is, because there it is more likely that the agent actually selects a worker unit, for select point this can not be said, since the agent could also select a random point and so selects no unit.<br />
The agent not learning the relation between selecting a worker unit before it can give the harvesting command also explains why the agent wasn't able to learn how to collect gas. In order to collect gas it had to also learn the relation between building a refinery at the right position, something it also didn't do.
<p dir="auto">The agent performing some select actions, and especially that those actions were performed rather early, indicate that the agent would have been able to learn this correlations, if given more time. After 75000 episodes the agent was stuck in a local minimum and the average rewards as well as the average collected minerals per episode were plateauing. It is difficult to say, how long this plateau would have lasted, but given that the agent reached a higher plateau before it is likely that the agent would have left that local minimum.
<h2>Conclusion
<p dir="auto">Running and observing the behaviour of the agent leads to the conclusion, that the state space of StarCraft II is so large, that it can not be efficiently explored by a reinforcement learning algorithm without specialised hardware, even though with the A3C algorithm a reinforcement learning algorithm that is known to have rather low resource demands was chosen.<br />
The agent showed some "intelligent" behaviour, especially since it learned the relation between harvesting resources and an increase in reward. From the action logs of the agent however it is obvious that the agent did not learn that it had to select a worker unit before it can give the command to harvest minerals nor did it learn how to harvest gas. The data from the action logs however also suggests that the agent could learn the relation of selecting a worker unit before it can give the harvesting command, if it is trained for a longer time.
<p dir="auto">Two things that possibly could speed up this convergence process are using the single_select tensor as input as well as performing an update after <em>n steps instead of after every episode. The former would give the agent a more direct feedback whether a select action was successful (so far this feedback is only given indirectly by whether more actions become available), while the latter would make improvements in the action policy faster available to the agent.
<p dir="auto">Another possibility would have been to give the agent more guidance in the learning process. It might would have made the agent faster in solving the task, but it would also required more intervention from my side and would override one big advantage of reinforcement learning: that the agent is able to find a solution with little to no prior knowledge build into it.